A grand jury is a legal body empowered to conduct official proceedings, investigate potential criminal conduct, and determine whether criminal charges should be brought. A grand jury may compel the production of documents and compel witnesses to appear before it and give sworn testimony. The grand jury is separate from the courts, which do not preside over its functioning. The United States and Liberia are the only countries that retain grand juries; some other common law jurisdictions formerly employed them, and most now use some other form of preliminary hearing.

Grand juries perform both accusatory and investigatory functions. Their investigatory functions include obtaining and reviewing documents and other evidence and hearing the sworn testimony of witnesses who appear before them. Their accusatory function is to determine whether there is probable cause to believe that one or more persons committed a particular offence within the venue of the district court. A grand jury in the United States is usually composed of 16 to 23 citizens, though in Virginia regular and special grand juries are smaller. In Ireland, grand juries also functioned as local government authorities. In Japan, the law of July 12, 1948 created the Kensatsu Shinsakai (Prosecutorial Review Commission, or PRC, system), inspired by the American institution.

The function of a grand jury is to accuse persons who may be guilty of an offense, but the institution is also a shield against unfounded and oppressive prosecution. It is a means for lay citizens, representative of the community, to participate in the administration of justice. It can also make presentments on crime and maladministration in its area. The traditional number of grand jurors is 23. The mode of accusation is either a written statement in solemn form (indictment) describing the offense with proper accompaniments of time and circumstances, and certainty of act and person, or a less formal mode, usually the spontaneous act of the grand jury, called a presentment. No indictment or presentment can be made except by the concurrence of at least twelve of the jurors. The grand jury may accuse upon its own knowledge, but it generally acts upon the testimony of witnesses under oath and other evidence heard before it. The proceedings of a grand jury are, in the first instance, at the instigation of the government or another prosecutor, and are ex parte and conducted in secret deliberation. The accused has no knowledge of, and no right to interfere with, the proceedings. If the jurors find the accusation, which is usually drawn up in form by the prosecutor or an officer of the court, to be true, they write upon the indictment the words "a true bill"; this is signed by the foreman of the grand jury and presented to the court publicly in the presence of all the jurors. If the indictment is not proven to the satisfaction of the grand jury, the words "ignoramus" or "not a true bill" are written upon it by the grand jury or its foreman, the bill is said to be ignored, and the accusation is dismissed as unfounded. If the grand jury returns the indictment as a true bill ("billa vera"), the indictment is said to be found, and the party stands indicted and is required to be put on trial.
The grand jury hears evidence only on behalf of the prosecution, because the finding of an indictment is only in the nature of an inquiry or accusation, which is afterwards to be tried and determined; however, the jurors ought to be thoroughly persuaded of the truth of the indictment, so far as their evidence goes. A body of persons brought together for an occasion, and for it only, is thus placed between the government and the citizen, as a shield against oppression and injury, and to afford reasonable protection to the accused if not justly suspected of a crime.

The first instance of a grand jury can be traced back to the Assize of Clarendon in 1166, an Act of Henry II of England. Henry's chief impact on the development of the English monarchy was to increase the jurisdiction of the royal courts at the expense of the feudal courts. Itinerant justices on regular circuits were sent out once each year to enforce the "King's Peace". To make this system of royal criminal justice more effective, Henry employed the method of inquest used by William the Conqueror in the Domesday Book. In each shire, a body of important men was sworn (juré) to report to the sheriff all crimes committed since the last session of the circuit court. From this originated the more recent grand jury that presents information for an indictment. The grand jury was later recognized by King John in Magna Carta in 1215 on the demand of the nobility.

The grand jury can be said to have "celebrated" its 800th birthday in 2015, because a precursor to the grand jury is defined in Article 61, the longest of the 63 articles of Magna Carta, also called Magna Carta Libertatum (Latin: "the Great Charter of Liberties"), executed on 15 June 1215 by King John and the barons. The document was primarily composed by the Archbishop of Canterbury, Stephen Langton (1150-1228). He and Cardinal Hugo de Sancto Caro developed schemas for the division of the Bible into chapters, and it is the system of Archbishop Langton which prevailed. He was a Bible scholar, and the concept of the grand jury may possibly derive from Deuteronomy 25:1: "If there be a controversy between men, and they come unto judgment, that the judges may judge them; then they shall justify the righteous, and condemn the wicked." (King James Version) Thus the grand jury has been described as the "Shield and the Sword" of the people: a "shield for the people" from abusive indictments of the government, or malicious indictments of individuals, and a "sword of the people" to cut away crime by any private individual or by any public servant, whether in the judicial, executive, or legislative branch.

On 2 July 1681, a popular statesman, Anthony Ashley Cooper, 1st Earl of Shaftesbury, was arrested on suspicion of high treason and committed to the Tower of London. He immediately petitioned the Old Bailey on a writ of habeas corpus, but the Old Bailey said it did not have jurisdiction over prisoners in the Tower of London, so Shaftesbury had to wait for the next session of the Court of King's Bench. Shaftesbury moved for a writ of habeas corpus on 24 October 1681, and his case finally came before a grand jury on 24 November 1681. The government's case against Shaftesbury was particularly weak: the government admitted that most of the witnesses brought against Shaftesbury had already perjured themselves, the documentary evidence was inconclusive, and the jury was handpicked by the Whig Sheriff of London.
For these reasons the government had little chance of securing a conviction, and on 13 February 1682 the case was dropped when the grand jury issued an ignoramus bill rather than comply with the King's wish for a "true bill", that is, a grand jury indictment.

The grand jury's theoretical function as a check on abuse of executive power was seen during the Watergate crisis in America. In United States v. Nixon, the U.S. Supreme Court ruled 8 to 0 on 24 July 1974 (Justice William Rehnquist, who had been appointed by Nixon, recused himself from the case) that executive privilege could be asserted against the co-equal legislative and judicial branches but not against a grand jury subpoena, implying that the grand jury enjoyed protections akin to those of a "fourth branch of government". The second Watergate grand jury indicted seven of the President's former aides and advisers, including former Attorney General John Mitchell, and named President Nixon as a "secret, un-indicted, co-conspirator." Despite avoiding impeachment by resigning, Nixon was still required to testify before a grand jury. Similarly, in 1998, President Clinton became the first sitting president required to testify before a grand jury as the subject of an investigation by the Office of Independent Counsel. The testimony came after a four-year investigation into Clinton and his wife Hillary's alleged involvement in several scandals, including Whitewater and the Rose Law Firm. Revelations from the investigation sparked a battle in Congress over whether or not to impeach Clinton.

Grand juries were also important earlier in American history. President Jefferson tried unsuccessfully to obtain a bill of indictment against Aaron Burr in the Commonwealth of Kentucky, the Missouri Territory, and the Louisiana Territory. Though eventually indicted in Virginia, Burr was acquitted at trial. Criticism of grand juries in relation to cases of police brutality was also highlighted by the grand juries held in several cases in 2014, such as the case against Officer Darren Wilson in the shooting of Michael Brown.

England and Wales

The sheriff of every county was required to return to every quarter sessions and assizes (or more precisely to the commission of oyer and terminer and of gaol delivery) 24 men of the county "to inquire into, present, do and execute all those things which, on the part of our Lord the King (or our Lady the Queen), shall then be commanded them". Grand jurors at the assizes or at the borough quarter sessions did not have property qualifications, but at the county quarter sessions they had the same property qualification as petty jurors. However, at the assizes the grand jury generally consisted of gentlemen of high standing in the county. After the court was opened by the crier making proclamation, the names of those summoned to the grand jury were called and they were sworn. They numbered at least 12 and not more than 23. The person presiding (the judge at the assizes, the chairman at the county sessions, the recorder at the borough sessions) gave the charge to the grand jury, i.e. he directed their attention to points in the various cases about to be considered which required explanation. The charge having been delivered, the grand jury withdrew to their own room, having received the bills of indictment. The witnesses whose names were endorsed on each bill were sworn as they came to be examined in the grand jury room, the oath being administered by the foreman, who wrote his initials against the name of the witness on the back of the bill.
Only the witnesses for the prosecution were examined, as the function of the grand jury was merely to inquire whether there was sufficient ground to put the accused on trial. If the majority of them (and at least 12) thought that the evidence so adduced made out a sufficient case, the words "a true bill" were endorsed on the back of the bill. If they were of the opposite opinion, the phrase "not a true bill", or the single Latin word ignoramus ("we do not know" or "we are ignorant (of)"), was endorsed instead and the bill was said to be "ignored" or thrown out. They could find a true bill as to the charge in one count and ignore it in another, or as to one defendant and not as to another; but they could not, like a petty jury, return a special or conditional finding, or select part of a count as true and reject the other part. When some bills were "found", some of the jurors came out and handed the bills to the clerk of arraigns (at assizes) or the clerk of the peace, who announced to the court the name of the prisoner, the charge, and the endorsements of the grand jury. They then retired and considered other bills until all were disposed of, after which they were discharged by the judge, chairman, or recorder.

If a bill was thrown out, although it could not again be preferred to the grand jury during the same assizes or sessions, it could be preferred at subsequent assizes or sessions, but not in respect of the same offence if a petty jury had returned a verdict. Ordinarily, bills of indictment were preferred after there had been an examination before the magistrates, but this did not always take place. With certain exceptions, any person could prefer a bill of indictment against another before the grand jury without any previous inquiry into the truth of the accusation before a magistrate. This right was at one time universal and was often abused. A substantial check was put on this abuse by the Vexatious Indictments Act 1859, which provided that for certain offences which it listed (perjury, libel, etc.), the person presenting such an indictment must be bound by recognizance to prosecute or give evidence against the accused, or alternatively must have judicial permission (as specified) to do so. If an indictment was found in the absence of the accused, and he or she was not in custody and had not been bound over to appear at assizes or sessions, then process was issued to bring that person into court, as it is contrary to English law to "try" an indictment in the absence of the accused.

The grand jury's functions were gradually made redundant by the development of committal proceedings in magistrates' courts from 1848 onward, when the three Jervis Acts, such as the Justices Protection Act 1848, codified and greatly expanded the functions of magistrates in pre-trial proceedings; these proceedings developed into almost a repeat of the trial itself. In 1933 the grand jury ceased to function in England, under the Administration of Justice (Miscellaneous Provisions) Act 1933, and it was entirely abolished in 1948, when a clause from 1933 saving grand juries for offences relating to officials abroad was repealed by the Criminal Justice Act 1948.

The grand jury was introduced in Scotland, solely for high treason, a year after the union with England, by the Treason Act 1708, an Act of the Parliament of Great Britain. Section III of the Act required the Scottish courts to try cases of treason and misprision of treason according to English rules of procedure and evidence.
This rule was repealed in 1945. The first Scottish grand jury under this Act met at Edinburgh on 10 October 1748 to take cognisance of the charges against such rebels as had not surrendered, following the Jacobite rising of 1745. An account of its first use in Scotland illustrates the institution's characteristics. It consisted of 23 good and lawful men, chosen out of 48 who were summoned: 24 from the county of Edinburgh (Midlothian), 12 from Haddington (East Lothian) and 12 from Linlithgow (West Lothian). The court consisted of three judges from the High Court of Justiciary (Scotland's highest criminal court), of whom Tinwald (Justice Clerk) was elected preses (presiding member). Subpoenas under the seal of the court and signed by the clerk were executed on a great number of persons in different shires, requiring them to appear as witnesses under the penalty of £100 each. The preses named Sir John Inglis of Cramond as foreman of the grand jury, who was sworn first in the English manner by kissing the book; the others followed three at a time. Lord Tinwald, addressing the jurors, informed them that the power His Majesty's Advocate had possessed before the union, of prosecuting any person for high treason who appeared guilty on a precognition taken of the facts, being now done away with, that power was lodged with them, a grand jury, twelve of whom had to concur before a true bill could be found. An indictment was then preferred in court and the witnesses endorsed on it were called over and sworn, on which the jury retired to the exchequer chambers and the witnesses were conducted to a room near it, whence they were called to be examined separately. Two solicitors for the crown were present at the examination but no one else; after they had finished and the sense of the jury had been collected, the indictment was returned a "true bill" if the charges were found proved, or "ignoramus" if doubtful. The proceedings continued for a week, in which time, out of 55 bills, 42 were sustained and 13 dismissed. Further Acts of Parliament in the 19th century regarding treason did not specify this special procedure, and the grand jury was used no longer.

In Ireland, grand juries were active from the Middle Ages during the Lordship of Ireland in the parts of the island under the control of the English government (the Pale), which was followed by the Kingdom of Ireland. They mainly functioned as local government authorities at the county level. The system was so called because the grand jurors had to present their public works proposals and budgets in court for official sanction by a judge. Grand jurors were usually the largest local payers of rates, and therefore tended to be the larger landlords; on retiring they selected new members from the same background. Distinct from their public works function, as property owners they were also qualified to sit on criminal juries hearing trials by jury, as well as having a pre-trial judicial function for serious criminal cases. Many of them also sat as magistrates judging the less serious cases. They were usually wealthy "country gentlemen" (i.e. landowners, landed gentry, farmers and merchants):

A country gentleman as a member of a Grand Jury...levied the local taxes, appointed the nephews of his old friends to collect them, and spent them when they were gathered in. He controlled the boards of guardians and appointed the dispensary doctors, regulated the diet of paupers, inflicted fines and administered the law at petty sessions.
From 1691 to 1793, Dissenters and Roman Catholics were excluded from membership. The concentration of power and wealth in a few families caused resentment over time. The whole local government system started to become more representative with the passing of the Municipal Corporations (Ireland) Act 1840, and grand juries were replaced in their administrative functions by democratically elected county councils under the Local Government (Ireland) Act 1898. After the formation of the Irish Free State in 1922, grand juries were no longer required, but they persisted in Northern Ireland until abolished by the Grand Jury (Abolition) Act of the Parliament of Northern Ireland in 1969.

The Fifth Amendment to the Constitution of the United States reads, "No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a grand jury..." In the early decades of the United States, grand juries played a major role in public matters. During that period counties followed the traditional practice of requiring all decisions to be made by at least 12 of the grand jurors (e.g., for a 23-person grand jury, 12 people would constitute a bare majority). Any citizen could bring a matter before a grand jury directly, from a public work that needed repair, to the delinquent conduct of a public official, to a complaint of a crime, and grand juries could conduct their own investigations. In that era most criminal prosecutions were conducted by private parties, either a law enforcement officer, a lawyer hired by a crime victim or his family, or even by laymen. A layman could bring a bill of indictment to the grand jury; if the grand jury found there was sufficient evidence for a trial, that the act was a crime under law, and that the court had jurisdiction, it would return the indictment to the complainant. The grand jury would then appoint the complaining party to exercise the authority of an attorney general, that is, one having a general power of attorney to represent the state in the case. The grand jury served to screen out incompetent or malicious prosecutions. The advent of official public prosecutors in the later decades of the 19th century largely displaced private prosecutions. While all states currently have provisions for grand juries, today approximately half of the states employ them, and 22 require their use to varying extents. The constitution of Pennsylvania required, between 1874 and 1968, that a grand jury indict all felonies.

Grand juries were once common across Canada. The institution of British civil government in 1749 in Nova Scotia brought with it the judicature system peculiar to that form, and the grand jury was inherent to it. A similar form derived in Quebec from the promise of the Royal Proclamation of 1763 that a faithful copy of the laws of England would be instituted in the North American possessions of the Crown. Archival records document the presentments of a grand jury in Quebec as early as 16 October 1764; one of the chief complaints was related to the jury trial and the use of language. The desire for English law was a driver of the division in 1791 of Quebec, as it was then known, at the Ottawa River into Upper Canada and Lower Canada, as each of the two groups (French and English) desired to maintain its traditions. In point of fact, the second law passed in Upper Canada relates to (petit) jury trial.
This was continued, so that Chapter 31 of the 1859 Consolidated Statutes of Upper Canada specifies the constitution of grand and petit juries in the province (now known as Ontario). The colony at St. John's Island, ceded by France in 1763 and separated from Nova Scotia on 30 May 1769, became Prince Edward Island on 29 November 1798. Prince Edward Island derived its grand jury from its administrative parent between 1763 and 1769, Nova Scotia, as did Sunbury County when it was split off in 1784 to become the Colony of New Brunswick. The Colony of British Columbia instituted a grand jury when it was formed on 2 August 1858, as did the Colony of the Queen Charlotte Islands (1853-1863) and the Colony of Vancouver Island (1848-1866), which were later absorbed by it. Old courthouses with the two jury boxes necessary to accommodate the 24 jurors of a grand jury can still be seen. The grand jury would evaluate charges and return what was called a "true bill (of indictment)" if the charges were to proceed, or a verdict of nolle prosequi if not. The practice gradually disappeared in Canada over the course of the twentieth century, after being the subject of extended discussions late in the 19th century. It was ultimately abolished in 1984, when the Nova Scotia courts formally ended the practice. Prince Edward Island maintained a grand jury as recently as 1871.

The grand jury existed in New South Wales for a short period in the 1820s. The New South Wales Act 1823 (UK) enabled the establishment of quarter sessions as a subsidiary court structure below that of the Supreme Court. Francis Forbes, Chief Justice, reasoned that this entailed the creation of quarter sessions as they existed in England; thus, inadvertently, trial by jury and indictment by grand jury were introduced, but only for these subsidiary courts. Grand juries met in Sydney, Parramatta, Windsor and other places. This democratic method of trial proved very popular but was resented by conservatives. Eventually, conservative elements in the colony were successful in having these innovations suppressed by the Australian Courts Act 1828 (UK). George Forbes, a member of the Legislative Council, unsuccessfully moved for the reintroduction of grand juries in 1858, but this was thwarted by the Attorney-General and the Chief Justice.

In South Australia and Western Australia, grand juries existed for longer periods of time. In South Australia, the first grand jury sat on 13 May 1837, but the institution was abolished in 1852. In Western Australia, the grand jury was abolished by the Grand Jury Abolition Act Amendment Act 1883 (WA) (section 4: "A Grand Jury shall not be summoned for the Supreme Court of Western Australia, nor for any General Quarter Sessions for the said Colony"). The Australian state of Victoria maintained, until 2009, provisions for a grand jury in the Crimes Act 1958 under section 354 indictments, which had been used on rare occasions by individuals to bring other persons to court seeking to have them committed for trial on indictable offences. Grand juries had been introduced in Victoria by the Judicature Act 1874 and were used on a very limited number of occasions. Their function in Victoria particularly related to alleged offences either by bodies corporate or where magistrates had aborted the prosecution.

Trial by jury was introduced in the Cape Colony by Richard Bourke, Lieutenant Governor and acting Governor of the colony between 1826 and 1828.
The acting Governor, who was later influential in the establishment of jury trial in New South Wales, obtained the consent of the Secretary of State for the Colonies in August 1827, and the first Charter of Justice was issued on 24 August 1827. Jury trial was brought into practical operation in 1828, and Ordinance 84 of 1831 laid down that criminal cases would be heard by a panel of nine, selected from males aged between 21 and 60 who owned or rented property to a value of £1 17s. per annum or were liable for taxes of 30 shillings in Cape Town and 20 shillings outside the town. Black (i.e. non-white) jurors were not entirely excluded and sat occasionally. This is not to imply, however, that juries did not operate in an oppressive manner towards the Black African and Asian residents of the Cape, whose participation in the jury lists was, in any event, severely limited by the property qualification. The property qualification was amended in 1831 and 1861, and, experimentally, a grand jury came into operation. The grand jury was established for Cape Town alone and met quarterly. In 1842 it was recorded that it served a district of 50,000 inhabitants, and in one quarterly session there were six presentments (1 homicide, 2 assaults, 1 robbery, 1 theft, 1 fraud). As elsewhere, the judge could use his charge to the grand jury to bring matters of concern to him to the attention of the public and the government. In May 1879 Mr. Justice Fitzpatrick, returning from circuit in the northern and western parts of the Cape Colony, gave a charge to the grand jury at the Criminal Sessions at Cape Town in which, after congratulating the jurors upon the lightness of the calendar, he observed that there were indications in the country of a growing mutual bad feeling between the races, etc. This was reported in the Cape Argus and became the subject of a question to the government in the House of Commons in London.

In France, the jury law of 1791 created an eight-man jury d'accusation in each arrondissement (a subdivision of the département) and a 12-man jury de jugement in each département. In each arrondissement the procureur-syndic drew up a list of 30 jurors from the electoral roll every three months for the jury d'accusation. There was no public prosecutor or juge d'instruction. Instead, the police or private citizens could bring a complaint to the justice of the peace established in each canton (a subdivision of the arrondissement). This magistrate interrogated the accused to determine whether grounds for prosecution existed and, if so, sent the case to the directeur du jury (the director of the jury d'accusation), who was one of the arrondissement's civil court judges and who served in the post for six months on a rotating basis. He decided whether to dismiss the charges or, if not, whether the case was a délit (misdemeanour) or a crime (felony, i.e. imprisonable for two years or more). Délits went to the tribunal de police correctionnelle of the arrondissement, while for crimes the directeur du jury convoked the jury d'accusation of the arrondissement in order to obtain an indictment. The directeur du jury drew up the bill of indictment (acte d'accusation) summarising the charges to be presented to the jury d'accusation. The directeur made a presentation to the jury in the absence of the accused, and the jury heard the witnesses. The jury then decided by majority vote whether there were sufficient grounds for the case to go to the tribunal criminel of the département. Between 1792 and 1795 there was no property qualification for jurors.
The functions of the jury d'accusation were prescribed in the law of 1791 passed by the Constituent Assembly and were maintained and re-enacted in the Code des Délits et des Peines of 3 Brumaire, Year 4 (25 October 1795), and this remained the operative law until it was abolished in 1808. Special juries and special grand juries were originally defined in law for cases thought to require more qualified jurors, but these were abolished in Year 8 (1799).

After World War II, under the influence of the Allies, Japan passed the Prosecutorial Review Commission Law on July 12, 1948, which created the Kensatsu Shinsakai (Prosecutorial Review Commission, or PRC, system), a body analogous to the grand jury. Until 2009, however, the PRC's recommendations were not binding and were regarded as advisory only. Additionally, a survey conducted by the Japanese Cabinet Office in October 1990 showed that 68.8% of surveyed Japanese citizens were not familiar with the PRC system. On May 21, 2009, the Japanese government introduced new legislation making the PRC's decisions binding. A PRC is made up of 11 randomly selected citizens, is appointed to a six-month term, and has as its primary purpose the examination of cases that prosecutors have chosen not to prosecute. It has therefore been perceived as a way to combat misfeasance by public officials.

From 1945 to 1972 Okinawa was under American administration, and grand jury proceedings were held in the territory from 1963 to 1972. By an ordinance of the civil administration of the Ryukyu Islands promulgated in 1963, grand jury indictment and petit jury trial were assured for criminal defendants in the civil administration courts. This ordinance reflected the concern of the U.S. Supreme Court that U.S. civilians tried for crimes abroad under tribunals of U.S. provenance should not be shorn of the protections of the U.S. Bill of Rights. Indeed, the District Court in Washington twice held that the absence of the jury system in the civil administration courts in Okinawa invalidated criminal convictions.

In Liberia, Article 21 of the constitution provides that "No person shall be held to answer for a capital or infamous crime except in cases of impeachment, cases arising in the Armed Forces and petty offenses, unless upon indictment by a Grand Jury". For example, the national Port Authority's managing director was indicted by the Montserrado County Grand Jury in July 2015 on charges of economic sabotage, theft of property and criminal conspiracy. The grand jury in Liberia dates from the time of the original constitution in 1847.

In Sierra Leone, under the administration of the Sierra Leone Company, which began in 1792, the Governor and Council, or any two members thereof, being also Justices of the Peace, held quarter sessions for the trial of offences committed within the colony. The process for indictment and so on was the same as the practice in England, or as near to it as possible. To effect this, they were empowered to issue their warrant or precept to the Sheriff, commanding him to summon a grand jury to sit at the court of quarter sessions. The grand jury continued in operation after the transfer of the colony to the Crown in 1807. Governor Kennedy (1852-1854) was concerned that jurors were frustrating government policy by being biased in certain cases; in particular, he felt that Liberated Africans on the grand jury would never convict another Liberated African on charges of owning or importing slaves. He promulgated the Ordinance of 29 November 1853, which abolished the grand jury.
Opposition was immediately mounted in Freetown. A public meeting launched a petition with 550 names to the Colonial Secretary in London, and the opposition declared that the Kennedy ordinance was a reproach upon the loyalty of the community. The grand jury had been considered a colonial body representative of local opinion, and the Colonial Secretary's support for Kennedy's abolition inspired a round of agitation for a local voice in government decision-making.
ANIMAL GROWTH AND DEVELOPMENT
Principles of Agriculture, Food and Natural Resources

Introduction

Growth and development have important implications for domestic animal production because they significantly influence the value of the animal being produced. A substantial proportion of agricultural research focuses on how to make animal growth and development processes more efficient. This research involves several disciplines because animal growth and development are controlled by genes and hormones. Because growth and development are continuous and dynamic processes requiring the integration of numerous physiological functions, they are influenced by: nutrition, efficiency of metabolism and respiration, hormonal regulation, immune response, the physiological status of the animal, diseases and parasites, and maintenance of homeostasis.

Animal growth and development can be separated into processes occurring before birth or hatching (pre-natal) and those occurring after birth or hatching (post-natal). An animal originates from a single cell (ovum or egg), which is fertilized by the male spermatozoon (sperm). The resulting zygote then develops in an enclosed environment (either the uterus or an egg) for a certain period of time known as the gestation or incubation period. Length of gestation: in cattle, approximately 283 days; in sheep, approximately 150 days; and in swine, about 112 days. The length of incubation of a chicken egg is 21 days. After they are born or hatched, young animals experience a period of rapid growth and development until they reach maturity. After an animal matures, some processes stop (e.g., bone elongation), while others slow down (e.g., muscle deposition). The maximum size of an animal is determined by its genetics, but nutrition and disease influence whether the animal reaches its genetic potential for size.

Pre-Natal Growth and Development

Pre-natal growth and development are broken down into two stages: embryogenesis and organogenesis.

Embryogenesis

Embryogenesis extends from the union of female and male gametes to the emergence of the embryonic axis and the development of organ systems at the neurula stage. During embryogenesis, the zygote develops into the morula, which becomes the blastula and then the gastrula. The zygote is a single cell that is repeatedly cleaved to form a multi-celled ball known as the morula. Cleavage is a process that involves mitotic division of the original cell into two cells, which then divide into four cells and then eight cells. Although the number of cells doubles at each stage of cleavage, individual cells do not grow or enlarge in size, so the morula is the same size as the original zygote, even though it is made up of numerous cells, called blastomeres. Cleavage continues until the cells of the developing embryo are reduced to the size of cells in the adult animal. The cells of the morula are then rearranged to form a hollow sphere filled with fluid. At this stage, the embryo is referred to as a blastula, and the fluid-filled space inside the sphere is called the blastocoel. The blastula undergoes a process known as gastrulation and becomes a gastrula. (Figure: gastrulation of a diploblast, showing the formation of germ layers as a blastula becomes a gastrula; some ectoderm cells move inward to form the endoderm.)
Up until the gastrula stage, cell division has occurred, but the blastomeres (cells) have not increased in size. It is when the embryo is in the gastrula stage that cell growth occurs at the same time as cell division. The process of gastrulation involves extensive rearrangement of the blastomeres. The cells on one side of the blastula move inward and form a two-layered embryo. The two layers formed are the ectoderm (outer layer) and the endoderm (inner layer). A third cell layer, known as the mesoderm, is formed between the ectoderm and the endoderm. The cavity that forms within the gastrula is known as the primitive gut; it later develops into the animal's digestive system. All tissues and organs form from one of the three layers of cells in the gastrula. After the germ layers are established, the cells rearrange and develop into tissues and organs. During this phase, known as organogenesis, cells grow and differentiate.

Organogenesis

The process of organogenesis extends from the neurula stage to birth or hatching. The neurula stage is distinguished by differentiation, which is when unspecialized embryonic cells change into specialized cells destined to form specific tissues or organs. Differentiation starts at the upper surface of the gastrula. Cells of the ectoderm divide and form the neural plate. Two raised edges, or neural folds, appear and gradually come together to form the neural tube. A mass of cells called the neural crest is pinched off the top of the neural tube and then migrates to other parts of the embryo to give rise to neural and other structures. Eventually, the front part of the neural tube thickens and forms the brain; the remainder of the tube becomes the spinal cord. In the first few weeks after conception, cells differentiate into organs and body structures. The embryo is then referred to as a fetus, and the body structures continue to grow and develop until birth. In horses, the embryo is referred to as a fetus at about 40 days following conception, while in humans it takes approximately 56 days to reach the fetal stage. Body tissues and organs are formed in a specific sequence; the head is formed before the tail, and the spinal cord is formed before other organs. Some highly differentiated cells, such as brain and nerve cells, cannot be replaced if they are destroyed after the original number is fixed during the fetal stage. Thus, nerve cells that are seriously damaged thereafter are not replaced and usually remain permanently damaged. Muscle cell numbers are also fixed during the fetal stage, so muscles can later increase only in size, not in cell number. Bone, and therefore skeletal size, can be increased to a degree by environmental conditions, but not beyond the genetic potential of the animal.

Post-Natal Growth

The period of post-natal growth extends from birth or hatching until death. The length of this period depends greatly on the species. The average life span of a mouse is about 2 years, while humans and elephants live to be well over 60 years of age. Sheep and cattle tend to live to be around 15 and 30 years of age, respectively. Muscle, bone, and fat are the three main types of tissues that develop as an animal grows. The rate of deposition depends on the age of the animal and the type of tissue being deposited. Muscle fibers are formed from multiple cells called myoblasts. While the animal is still in the prenatal stage, myoblasts fuse together to form a myotube, which develops into a muscle fiber. As a result, one muscle fiber has multiple nuclei.
Because no new fibers are formed after birth, postnatal growth of muscle is characterized by increases in fiber length and diameter. Muscle fibers are predominantly protein; fiber size is determined by the rate of protein synthesis minus the rate of degradation. The deoxyribonucleic acid (DNA) content of muscle cells also increases as the animal develops.

Bone tissue grows both before and after birth. A bone grows in length through the ossification, or hardening, of the cartilage at each end. After the cartilage on the ends of a bone has completely hardened, the bone stops growing in length. However, bones can increase in width and can repair themselves if broken. Although individual bones reach a mature length and stop elongating, bone tissue is constantly being deposited and resorbed.

Fat tissue is composed of fat cells and connective tissue. Fat cells increase or decrease in size depending on the nutritional status of the animal. The two types of fat tissue are white fat, which stores energy, and brown fat, which helps maintain a constant body temperature. Fat is deposited in four different areas throughout the body or carcass. Fat deposited in the abdominal cavity around the kidneys and pelvic area is called intra-abdominal fat; it is usually the first fat deposited. Fat deposited just under the skin is referred to as subcutaneous fat, or backfat, and is usually the largest amount of fat deposited. Fat between the muscles of animals is called intermuscular fat, while fat deposited within the muscle is called intramuscular fat. The level of intramuscular fat is referred to as the degree of marbling and affects the quality and taste of meat. In the United States, an important factor affecting the value of a beef carcass is its quality grade, which is determined by the degree of marbling in the carcass; therefore, manipulation of this process is very important in meat production systems. Intramuscular fat is the last type of fat to be deposited, so animals with high degrees of marbling also have large amounts of fat deposited in other areas of the carcass. Muscle, bone, and fat are deposited differently throughout the animal's life. Bone elongation stops after the animal reaches a mature body size, but bone tissue deposition and resorption continue until the animal dies.

Deposition of Different Tissues

The majority of muscle tissue develops between birth and maturity. Muscle growth then slows down, but it is not physiologically halted as bone growth is. Fat deposition occurs mainly after the bulk of the muscle has been deposited. It is a common misconception that fat is deposited only in middle-aged or mature animals; a significant amount of fat is deposited in the young. It is only because protein deposition declines markedly with age that fattening is more apparent in mature animals. The rate of deposition and the amount of fat deposited depend on the diet of the animal. Young animals receiving an overabundance of milk or nutrients become fat. During the early stages of an animal's life, growth occurs very quickly. After puberty, bone elongation stops, so skeletal size does not increase much after that point, although live weight continues to increase. In cattle, puberty occurs at about 10 months of age, while in sheep and pigs it occurs around 6 and 5 months, respectively.

Hormonal Control

Deposition of different tissues and partitioning of energy for the various processes involved in growth and development are regulated by hormones.
Some of the more important hormones involved in growth and development are insulin, growth hormone, insulin-like growth factor 1 (IGF-1), the thyroid hormones, the glucocorticoids, and the sex steroids.

Insulin

Insulin is a very important hormone involved in muscle growth and development. It stimulates the transport of certain amino acids into muscle tissue and is active in reducing the rate of protein degradation. Insulin is also a key hormone in the regulation of food intake, nutrient storage, and nutrient partitioning.

Growth Hormone

Growth hormone stimulates protein anabolism in many tissues. This effect reflects increased amino acid uptake, increased protein synthesis, and decreased oxidation of proteins. Growth hormone enhances the utilization of fat by stimulating triglyceride breakdown and oxidation in adipocytes. In addition, growth hormone seems to have a direct effect on bone growth by stimulating the differentiation of chondrocytes. Growth hormone is one of many hormones that serve to maintain blood glucose within a normal range. For example, it is said to have anti-insulin activity because it suppresses the ability of insulin to stimulate the uptake of glucose in peripheral tissues and enhances glucose synthesis in the liver. Somewhat paradoxically, administration of growth hormone stimulates insulin secretion, leading to hyperinsulinemia. The major role of growth hormone in stimulating body growth is to stimulate the liver and other tissues to secrete IGF-1.

Insulin-like Growth Factor 1

IGF-1 stimulates the proliferation of chondrocytes (cartilage cells), resulting in bone growth. It is also important in protein, fat, and carbohydrate metabolism. IGF-1 stimulates the differentiation and proliferation of myoblasts, as well as amino acid uptake and protein synthesis in muscle and other tissues.

Thyroid Hormones

Animals require thyroid hormones for normal growth. Deficiencies of T4 (thyroxine) and T3 (tri-iodothyronine) cause reduced growth as a result of decreased muscle synthesis and increased proteolysis. Alterations in thyroid status require several days to take effect and are associated with changes in the ribonucleic acid (RNA)/protein ratio in skeletal muscle. In addition, thyroid hormones have an important influence on the prenatal development of muscle.

Glucocorticoids

Glucocorticoids restrict growth and induce muscle wasting; they have different effects on different types of muscle. Some evidence indicates that glucocorticoids also affect metabolic rate and energy balance.

Sex Steroids

Androgens (male sex hormones) have an obvious effect on muscle development and growth in general, because male animals grow faster and develop more muscle than females do. However, estrogens (female sex hormones) also have significant roles in maximizing growth and are commonly used in artificial growth promotants for both male and female cattle. Estrogen is thought to act indirectly through its effects on the secretion of other hormones, whereas androgens are believed to have a more direct effect because of androgen receptors located on muscle cells.

Homeostasis

Homeostasis is a concept that is closely integrated with the growth and development of an animal. Normal growth patterns are affected if homeostasis is not maintained at all times. Homeostasis refers to the animal's maintenance of an internal equilibrium.
Many processes and functions, both voluntary and involuntary, contribute to maintaining this state of internal balance, which is controlled by the nervous system (nervous regulation) and the endocrine system (chemical regulation). Homeostasis is maintained at all levels, from individual cells to the whole animal. For example, cells must maintain suitable salt and water levels, while tissues and organs require specific blood glucose levels. Therefore, maintaining a state of homeostasis requires a high level of interaction between hormonal and nervous activities. Another example of homeostasis is the maintenance of a constant internal temperature. Temperature must be kept within a certain range for an animal to remain alive and to grow and function normally. If an animal is becoming increasingly hot, it may move from an open area to a shaded area to help reduce body heat; this is a voluntary action performed by the animal. At the same time, the animal may involuntarily start to sweat. This is a mechanism that many animals use to dissipate heat, but it is not something controlled by the animal; rather, it occurs automatically in response to internal stimuli.

Genetic Control

Most processes involved in growth and development occur at a cellular level. Because this is such a fine-grained level, it can be difficult to control or manipulate these processes outside of a scientific laboratory. However, managers of livestock systems must manipulate growth and development to optimize production. Consequently, knowledge of what is happening at a cellular level must be applied at the whole-animal level so that growth and development can be managed. Manipulation of genetics is an important factor in the management of livestock operations because the genetic composition of an animal determines its potential for growth and development. All animals have a set genotype that determines their potential for growth. However, their phenotype is affected by environmental factors, including nutrition, disease, parasites, and injuries. Traits are heritable, which means that they are passed on to an individual from its parents; however, some traits are more heritable than others. That is, for particular traits the genotype of an individual is expressed more strongly and the environment is less influential. Specific genes code for different traits, and some traits are influenced by multiple genes. For example, rate of growth is a trait influenced by many genes controlling things such as appetite, tissue deposition, skeletal development, energy expenditure, and body composition. The genes for all of these traits add together to produce the growth rate we can measure. The heritability of growth-related traits differs between species.

Effect of Genetics on Pre-Natal Growth

Genetic potential for prenatal growth can be inhibited by environmental factors. For example, prenatal growth in chickens is limited by egg size because of the amount of nutrients available to the developing chick. In litter-bearing animals, such as swine and rabbits, the birth weight of individuals may be affected by the size of the litter and, consequently, the available uterine space and supply of nutrients. Embryos from small-breed parents have been transplanted into larger breeds within the same species, resulting in birth weights greater than those of their non-transplanted contemporaries.
However, birth weights were not as large as those of offspring of larger breeds with the genetic potential for heavier birth weights.

Effect of Genetics on Growth from Birth to Weaning
Growth from birth to weaning is affected significantly by the amount of milk produced by the dam. Many studies of swine indicate that up to 20% of growth during this period is determined by heredity, while 35% to 50% of the weaning weight is affected by the milking ability of the dam, litter size, and other environmental factors. In cattle and sheep, growth during this period is more strongly related to genetic ability, with heritability estimates ranging from 20% to 30%.

Effects of Genetics on Post-Weaning Growth
During the post-weaning period, the individual’s genetic potential for growth can be more easily evaluated, provided the nutritional levels are adequate and disease and parasites are controlled. Selecting for mature size differences over time has developed large and small strains of chickens, rabbits, swine, cattle, and sheep. The mature size of animals is directly related to their rate of gain and feed efficiency. Large and late-maturing animals are still growing when they reach conventional market weights and are carrying less fat and waste. These larger-framed animals are more suitable for markets requiring lean meat. Thus, the grower who produces animals with high-yielding carcasses is rewarded financially. On the other hand, small and early-maturing animals have just about finished growing when they reach desirable market weights and are frequently carrying much higher proportions of fat. So, in markets where marbling is desired, this is a good characteristic.

Combining Growth Traits with Other Breeding Priorities
The objectives of individual livestock production operations need to be considered when planning breeding programs. Genetic manipulation through breeding is a long-term commitment; producers need to carefully consider their long-term market objectives and opportunities. Most animals are produced for a specific market; throughout Texas and the United States, cattle production is focused on feedlot systems that produce meat almost entirely for domestic consumption. Cattle that produce high-yielding carcasses with sufficient marbling and have high feed efficiency are considered most valuable. All levels of production, including cow-calf operations, stocker operations, and feedlots, focus on producing beef of acceptable quality as efficiently as possible. In Australia, cattle are grass-fed until they are two to three years old, resulting in leaner, larger carcasses that are ultimately destined for export to Asian countries, such as Japan or the Philippines. Greater emphasis is placed on growth rates in male animals and calving percentages in females. Survival is also a major factor to be considered because of harsh environmental conditions. For example, tick resistance and heat tolerance are very important traits. Selecting for increased growth rates ultimately results in a line of larger-framed animals. The negative results of this can be decreased marbling and feed efficiency, increased feed costs, higher birth weights, and higher rates of dystocia. This has led some producers to consider feed efficiency a more suitable selection trait; however, the heritability of feed efficiency is low and genetic improvement is slow.
For these reasons, selection based on indirect traits may be more effective.

The Influence of External Factors
An animal never reaches its genetic potential for growth, fattening, milk production, egg laying and other developmental processes if diet and environmental conditions are not optimal or at least favorable.

Nutrition
Nutrition is the variable that managers of livestock production systems have the most control over in the short term. An animal requires a certain level of nutrition for the normal development and functioning of its body systems. This is commonly referred to as the maintenance requirement of an animal. Additional nutrients are then required if optimal growth of muscle and fat is to occur. Poor nutrition can have multiple consequences, such as stunted growth, malformed organs, disease, brittle skeletons, increased susceptibility to parasites, and poor reproductive performance. All of these consequences lead to reduced income for the owner of the animals. Consequently, livestock operations spend a lot of time and money trying to provide optimal nutrition for their animals. For more intensive livestock operations, such as swine and cattle feeding operations or broiler grow-out farms, feed costs can contribute more than 80% of the total costs involved in producing an animal. Nutrition affects all stages of growth and development. The nutritional status of the dam throughout the gestation and lactation periods has significant effects on the offspring’s development. Poor nutrition in reproducing females leads to low birth weights and heavy death losses in newborn progeny. Species differ in how they adapt to poor nutrition. For example, sheep and cattle partition as many nutrients as possible into the fetus and even use their own reserves to meet nutritional deficiencies. Iron deficiencies cause problems because the dam utilizes her own reserves to supply the iron requirements of the growing fetus. In comparison, some species abort the fetus if their nutritional status falls below a certain level. The effects of poor nutrition after birth on postnatal growth and ultimate mature size depend on three factors: (1) the age at which poor nutrition occurs, (2) the length of time during which the animal was subjected to poor nutrition, and (3) the kind of poor nutrition to which the animal was subjected (for example, a specific imbalance of one or more essential amino acids). Poor nutrition at any stage in an animal’s development has long-term effects. For example, cattle that experience a period of poor nutrition as young calves never meet their genetic potential to marble. However, structural development continues as normal if the period of poor nutrition is relatively short in duration. A short period of poor nutrition can even be followed by a benefit in the form of compensatory growth. Compensatory growth is a phenomenon that has been identified in animals that go through a short period of malnutrition but then return to an adequate or high plane of nutrition. Animals lose weight or their development is temporarily slowed but, as the animal’s nutritional status improves, they start utilizing nutrients more efficiently. Thus, the resulting weight gain occurs more quickly and more efficiently. Nutrition is used to manipulate the growth patterns of animals. For example, in feedlots, high-energy diets are commonly fed in the finishing phase to encourage deposition of fat (marbling).
The nutritional strategies used depend on the desired end-product, the age at turn-off, and the available feed sources.

Diseases
Any form of disease negatively impacts the growth and development of an animal. Sickness usually requires nutrients to be repartitioned and commonly causes reductions in intake. Some diseases also create long-term consequences that impair the animal’s ability to harvest, digest, or absorb nutrients, causing long-term impairment of growth and development.

Parasites
The effect of parasites varies from mild to severe and can be as drastic as death. Both internal and external parasites cause a decrease in appetite and, therefore, decreased intake of food; depressed wool production; inhibited normal digestive function; permanent internal tissue damage; and physical sickness (for example, blood poisoning by ticks). Many treatments are available to prevent and combat parasitic infections; for example, cattle are run through tick-treatment baths, such as those at the APHIS facility in McAllen, TX, to control cattle fever ticks.

The Aging Process in Animals
Aging involves a series of changes in animals that lead to physical deterioration and eventually to death. There is an age at which each species reaches the peak of its productive life. For example, egg laying is highest during a hen’s first year of production; maximum litter size in swine occurs at 3 to 4 years of age. It has been said that as an animal is born, it begins to die. In a physiological sense, this is true, because shortly after formation of the embryo, cells of certain tissues stop dividing. Subsequently, cell division stops in other tissues until only those tissues essential to the maintenance of life (that is, skin and blood) continue to divide. An animal’s longevity is roughly proportional to the length of time required for the animal to reach maturity. For example, rabbits, which reach maturity in about 6 months, have a life expectancy of about 4 years. Cattle require 2 to 3 years to mature and have a life expectancy of 20 to 25 years. Most physiological functions of animals deteriorate with age. The reproductive organs secrete lower levels of hormones, muscular strength and speed of motion decline, and reaction time increases. The time required for recovery from body substance imbalances becomes longer with age. Collagen and other proteins in the skin and blood vessels become less elastic with age; thus, wrinkles form and vessels collapse or burst. An increased breakdown of the neural and glandular control involved in the aging process also occurs. Reproductive and lactating abilities of females decrease with age, lowering their productivity. Sows become inefficient producers even earlier because they reach excessive sizes, creating higher body maintenance requirements and resulting in more injuries to baby pigs. Therefore, sows are frequently culled by 3 years of age. Cows are usually culled from the breeding herd at 10 to 11 years and ewes at 7 to 8 years. Many factors, both genetic and environmental, affect the life span of animals. Longevity of animals is a heritable trait, so it can be estimated by knowing the life span of an individual’s parents and siblings. Moreover, life span is decreased if an animal is required to produce at higher than normal levels for a substantial period of time. This is commonly seen in high-producing dairy cows.
Inadequate or excessive nutrition also hastens the aging process. Higher environmental temperatures seem to shorten life expectancy. The sex of an animal appears to be involved in longevity because females usually outlive males.
In mathematics, understanding the domain and range of a function is crucial for solving equations, graphing functions, and analyzing their behavior. The domain represents the set of input values for which the function is defined, while the range represents the set of output values produced by the function. Whether you’re a student learning calculus or someone interested in mathematics, this article will guide you through the process of finding the domain and range of a function.

What Is a Function?
A function is a mathematical relationship that assigns each element from one set (called the domain) to a unique element in another set (called the codomain); the set of output values actually produced is the range. In simpler terms, it takes an input (usually denoted as “x”) and produces an output (usually denoted as “f(x)”). The output of a function depends on the input, and each input corresponds to one and only one output.

Finding the Domain of a Function
The domain of a function defines the set of all permissible input values for that function. In other words, it answers the question: “What values of ‘x’ can I plug into the function?” Start by examining the function for any potential restrictions. Common restrictions include: Division by zero: Identify any denominators in the function, and determine when they equal zero. Exclude those values from the domain. Square roots: If the function contains square roots or any even roots, the value inside the root (the “radicand”) must be non-negative. So, set the radicand greater than or equal to zero and solve for ‘x’. Logarithms: If the function involves logarithms, the argument of the logarithm must be greater than zero. For functions with rational expressions (fractions), look for values of ‘x’ that make the denominator equal to zero. These values must be excluded from the domain to avoid division by zero. For functions containing square roots or even roots, ensure that the radicand is non-negative. Set the radicand greater than or equal to zero and solve for ‘x’ to find the valid domain. If the function is subject to specific inequalities (e.g., “x > 0” or “x < 5”), these conditions should be taken into account when determining the domain. In piecewise functions, each piece may have its own domain restrictions. Find the domain for each piece separately. Finally, combine all the valid input sets you found in the previous steps. The domain of the function is the intersection of all these sets.

Finding the Range of a Function
The range of a function represents the set of all possible output values that the function can produce. To find the range of a function, follow these steps: Start by analyzing the behavior of the function, particularly its graph. Visualize the graph and observe its highest and lowest points, horizontal asymptotes, and any intervals where the function is increasing or decreasing. If you’re dealing with more complex functions, you can use calculus to find the range. Determine the derivative of the function and find critical points where the derivative is zero or undefined. These critical points are potential extrema of the function. If the function has horizontal asymptotes, take into account how the function approaches these values as ‘x’ approaches positive or negative infinity. Choose test values within the domain of the function and evaluate the function at these points to determine the corresponding output values. These test points can provide insight into the range. Combine all the possible output values you found using the above methods.
The range of the function is the set of all these output values. Example: Let’s find the domain and range of the function f(x) = 1/x. Domain: The domain of this function excludes x = 0, as division by zero is undefined. So, the domain is all real numbers except x = 0, often expressed as “x ∈ ℝ, x ≠ 0.” Range: The range is all real numbers except zero: as ‘x’ grows toward positive or negative infinity, 1/x approaches zero but never actually equals it, while every nonzero real value is attained for some ‘x’. This can be expressed as “f(x) ∈ ℝ, f(x) ≠ 0.” Determining the domain and range of a function is an essential part of understanding its behavior and applications in mathematics. By analyzing the function for potential restrictions, inequalities, and asymptotic behavior, you can effectively find the valid domain and range. Whether you’re working with simple linear functions or complex equations, following these steps will help you identify the input and output values that make up the domain and range of a function.
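For readers who like to double-check such results mechanically, the short sketch below uses Python with the SymPy library (an assumed tool choice; any computer algebra system would do) to compute the domain and range of f(x) = 1/x over the real numbers, matching the hand-derived answer above.

```python
# Minimal sketch: verify the domain and range of f(x) = 1/x with SymPy.
# Assumes SymPy is installed (e.g., pip install sympy).
from sympy import Symbol, S
from sympy.calculus.util import continuous_domain, function_range

x = Symbol("x", real=True)
f = 1 / x

# Domain: all real x where the expression is defined (x = 0 is excluded).
print("Domain:", continuous_domain(f, x, S.Reals))

# Range: all values the function actually attains (0 is never reached).
print("Range:", function_range(f, x, S.Reals))
```

Both calls should report the union of the open intervals (-∞, 0) and (0, ∞), i.e. all real numbers except zero, in agreement with the reasoning above.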
The yearning of the Europeans, especially the Portuguese, Spanish, British and Dutch, for exploration, colonisation and imperialism was a major factor in expanding the slave trade networks in the Atlantic. As discussed by Timothy P. Grady in the book The Atlantic World 1450-2000, “explorers from Portugal, Spain and other European nations expanded the geographic knowledge southward along the coast of Africa and westward across the Atlantic shores of the Americas”. The urge for this exploration was triggered by the fall of Constantinople, the last vestige of the Roman Empire, to the Muslim Turks in May 1453, which shook the fortitude of the European countries and the Christian faith. The expansion of the Ottoman Empire around the Mediterranean region deprived European merchants of the lucrative trade routes along the Silk Road to the East. The threat of lost communication and trade routes across the Mediterranean into China, India and other regions of eastern Asia, and lost access to silk and other precious commodities carried along this route, forced Europeans to explore alternate trade routes to Asia by turning westward for new opportunities. Discovery of new routes west of Europe through the Atlantic led to European arrival off the west coast of Africa in the late fifteenth century. By the mid-seventeenth century, the coastline of West Africa was dotted with fifty forts and slave trading posts of competing European countries – Portugal, Spain, Britain, Holland, Denmark, Sweden and Germany – dividing the coastline into the Ivory Coast, Gold Coast and Slave Coast. The political set-up in Africa also facilitated the slave trade. Africa was divided into a number of small and large states, chieftaincies and independent villages, each with their own form of government, religion, customs and traditions. These territories often fought with each other, and the captives of war were taken as slaves. Such conflicts were justified wars which, according to Warren C. Whatley, were “natural struggles of nation building” conducted in the normal course of affairs. The captives, referred to as “joint-products of war” or “stolen goods”, were then exported. With the advent of the Europeans, domestic conflicts became slave raids. As Robin Law asserted, the Kingdom of Dahomey dominated the slave raiding and trading from 1715 to 1850. Its kings held a royal monopoly on the trade and conducted slave raids through their armies. Thus the political ambitions of the European and African monarchies led to the development of the slave trade. The developments in technology and their impact on navigation, shipbuilding and firearms aided the growth of the Atlantic slave trade.

Navigation
The desire for exploration spurred European scholars, navigators and sailors to expand their knowledge of geography and devise new ways of charting and mapping their journeys. Increased use of the hourglass and log to measure time and distance, together with Portolan charts, made navigation better documented and more reliable. In 1462, Portuguese navigators devised methods of figuring out latitude by measuring the height of the Pole Star above the horizon. Later, in 1484, astronomers in the court of King Joao II, using the midday sun to figure latitude, produced a set of declination tables. Under the patronage of Prince Henry of Portugal, other significant developments were made in the study of winds, tides and ocean currents; documents from previous explorations were compiled and maps and charts were continuously improved.
Thus a good number of the problems associated with navigation were resolved by the late fifteenth century. As navigation across the great oceans became manageable, the transportation of slaves between the continents – Europe, Africa and America – became less complicated.

Ship Building
The changes to the design and functionality of European ships were another major factor that contributed to the expansion of the Atlantic slave trade. Between the fourteenth and mid-nineteenth centuries, sailing ships were the main means of transport of the slaves. These sailing ships kept changing over time in terms of design, fittings, equipment and the materials used for sails. Use of three to four masts, a sturdy hull, square, lateen and sprit sails, and a stern rudder enhanced their sailing power and speed and eased control of the ships in wild weather conditions. Small ships such as the caravel, a highly manoeuvrable vessel introduced in the fifteenth century, encouraged the Portuguese to explore regions around the West African coast such as Senegal, Cape Verde and the Canary Islands to secure staples, gold and slaves. Other ships designed by the Portuguese for travel in the Atlantic Ocean were the carrack, a four-masted ship, and the galleon, a heavily armed multi-deck sailing ship. The ships also grew in size, and multiple decks were able to accommodate larger numbers of slaves. The mean tonnage of the slave ships from Liverpool in 1730 was 75 tons. This increased to 130 tons in 1790 and 226 tons in 1805.

Weapons
The supremacy of Europe in the slave trade was driven by its guns, cannons and restraints. Europeans used a variety of weapons to threaten the slaves and the enemy ships at sea, and to maintain control both on land and at sea. The diffusion of the new gunpowder technology accelerated the slave trade. African communities, threatened by armed neighbours, resorted to trading captives for gunpowder, guns and muskets. In the words of Warren C. Whatley, a vicious “raid or be raided” arms race, known as the Gun-Slave Cycle, was created. The replacement of the ineffective matchlock musket by the flintlock in the 1680s drastically increased firearms demand in West Africa. According to J. E. Inikori, the firearms imported from England during the eighteenth century were between 283,000 and 394,000 guns per annum. The demand for firearms from West Africa was so high that manufacturing companies such as Farmer and Galton were forced to pressurise their workers to increase production. The demand for firearms was matched by the supply of slaves. The developments in restraining technology aided the slave trade in terms of terrorising the slaves and reducing escapes. The restraints used in the trade included neck restraints, iron collars linked by chains, tongue restraints and leg and wrist shackles to trammel movement. The ability to stow more slaves per cubic foot of the ship, the ability to navigate better around the coast of Africa, the reduction in escapes due to draconian restraints, and the organisation of forts around the coast to lodge the captives helped to reduce costs and promote trade.

African Demand for Goods from Europe
The introduction of a wide range of consumption goods in West Africa, the possession of which was a matter of social status and power, was another factor leading to the development of the Atlantic slave trade.
The African demand for iron and copper bars, textiles, salt, earthenware, weapons and firearms, rum, wine, gin and cowrie shells and a variety of both European and oriental goods had a profound impact on the slave trade. The demand for these goods was so high that the European suppliers could not cope with it. J. E. Inikori commented that firearms and textiles were in such high demand by the slave traders that they were not prepared to clear their slave cargo if they were not satisfied with the quantity of supply of these items of trade. The merchants were willing to trade their morality to capture slaves in exchange for European goods. Alan Rice clearly identifies this when he asserts, “The desire for luxury goods was so great that these African elites would consign war captives and domestic slaves to an unknown fate across the ocean in exchange for them”.

Growth in Slave Trading Institutions
Growth in social institutions to perform a more organised slave trade was a key factor in the Atlantic slave trade. The increase in demand for and prices of slaves encouraged the development of various institutions to address the issues associated with the trade – capture, enslavement, seasoning, trade, regulations and taxation. The merchants explored new ways of trapping the slaves – deception, kidnapping, ambush attacks, promoting conflicts between villages and the pretence of family substitution for the runaways. Olaudah Equiano, kidnapped in the 1750s, described it in his own words: “One day when all our people were gone out to their works as usual and only I and my sister were left to mind the house, two men and a woman got over our walls and in a moment seized us both… and ran off with us into the nearest wood”. Drought and famine in Africa, due to marginal rainfall in the savannah areas – Angola and the grasslands extending from Senegambia to Cameroon – forced despairing families to sell themselves. People were too poor to survive and offered themselves as collateral for credit; non-repayment made them slaves. The development of enforcement mechanisms also encouraged the slave trade. Credit was offered to slave traders to cover the costs of acquiring, transporting and housing slaves until they were boarded on the ships. Other such mechanisms, described by Warren C. Whatley, were “the use of factories and forts as holding pens and warehouses, African canoe houses and other trade coalitions, secret societies and treaties between European and African nations”. The cycle of violence to hunt down slaves continued, leading to an upsurge in the slave trade.

The Decline in Population in the Americas
This was another important factor that led to the development of the Atlantic slave trade. With the European colonisation of the Americas, there was a growth in mining and plantations in the islands between North and South America, and the labour demands were met by native Indians. The massive mortality rates of the natives due to poor working conditions and new European and African diseases such as measles, smallpox, the plague, influenza, malaria and yellow fever led to a decline in the population of the Americas. Figure 1 presents data on the drastic decline in population in the Americas, which led to a decline in labour. The Europeans now turned to Africa for labour. They soon found that the African slaves were more productive and output quadrupled. Shiploads of slaves were exported to work in these American islands and soon the slave trade was transformed from a marginal institution to a global phenomenon.
Growth in Plantations
The development of the Atlantic slave trade stemmed from the growth in plantation agriculture such as sugar, cotton, tobacco, tea and rice in the New World. The demand for plantation workers in sixteenth-century Brazil, the seventeenth-century Caribbean and nineteenth-century Cuba instigated slave supply from Africa. The intensity of the growth in plantations could be seen in small islands like Barbados. By 1650 Barbados had 300 plantations, which multiplied to 900 by 1670 – a threefold increase in two decades. The growing demand for sugar, multiplying at a compound rate of 5% per annum in the seventeenth century to about 10% in the nineteenth century, increased the demand for African slaves to work in the sugar plantations in the New World lands. As H. Hobhouse puts it, “‘food’ became responsible for the Africanization of the Caribbean”. This small group of islands accounted for 80% of the sugar and slave trade until the eighteenth century. The slave labour for the majority of these plantations was secured from Africa through the Atlantic. As plantations expanded into a global trade network, so did the Atlantic slave trade.

Slave Trade and Profitability
There were various groups of stakeholders in the Atlantic slave trade who participated in it due to the profitability of the trade in slaves. African rulers profited in terms of taxes and customs duties paid by the European merchants. They were given the first choice of any merchandise that was brought into Africa for trade and were able to bargain lower prices for these goods. The rulers also commanded premium prices for their own slaves. They also received considerable gifts from the merchants in order to secure preferential trading agreements. Ouidah, a coastal town in Benin, West Africa, had been a strong European trading post since 1720 and was visited by forty to fifty European trading vessels per year. Hence the ruler who started off with ten slaves in exchange for opening his market in 1700 was able to command a higher price of twenty slaves by 1720. This was in addition to the privileges in the purchase or sale of the commodities, which included the slaves as well. According to Miles Ogborn, by the 1800s the rulers in Africa were able to obtain “goods for each slave worth three or four times as much in 1700”. Both African and European slave traders were paid well. Overwhelmed by the profits from slave exports, wealthy merchants both in Africa and Europe expanded slave trading networks to prodigious numbers. Figure 2 analyses changes in supply by African slave merchants in response to changes in prices. The data reveals that the supply increased as price increased. Hence, the largest emigration of slaves in the eighteenth century can be attributed to the increase in price from £14 to £25. Between the years 1779 and 1788, there was a decrease in demand for slaves due to the War of American Independence. This created an excess supply of slaves on the African coast. The planters in the Americas then started restocking their slave supply. The European slave traders capitalised on this by securing supply at cheaper prices from Africa and selling at higher prices in the Americas, thereby making abnormal profits between these years. Thus the slave trade allowed African and European slave traders to maximise profits from the trade. The consumers of Europe profited in terms of cheaper commodity prices due to increased output by African slaves in the plantations. Figure 3 presents data on the production of sugar and tobacco by British colonies.
The increased volume of production of these commodities reduced their prices, much to the favour of European consumers. Tobacco, which fetched twenty to forty shillings in 1619, was sold for a shilling or less, while the price of sugar halved between 1630 and 1680. Thus the consumers were able to enjoy the luxury of these commodities at affordable prices. The planters were another group of stakeholders in the trade who profited in their own way. Labour became cheap and more available due to the Atlantic slave trade. The planters always worked with a motive of profitable exploitation of the factors of production, especially labour, and work was dictated by discipline and violence. Successful planters were able to create immense wealth and lead extravagant lifestyles. While the slaves slogged day and night in the plantations, the owners were able to retreat to the Great Houses built on commanding positions, with beautiful gardens, imported china, furniture and furnishings. The fortune and lifestyle of Sir Charles Price, the largest land and slave owner of Jamaica between 1738 and 1772, demonstrate the height of planter lifestyles. “The Decoy”, the Great House he built, was a mansion with magnificent rooms with mirrors and wood carving in the decor, lakes and parks around the house and elegant gardens with fruits, flowers and vegetables. This essay has clearly illustrated the factors that led to the development of the Atlantic slave trade. Even though the political set-up in Europe and Africa and the growth in plantations laid the foundation for the trade, it was the technological developments and social influences on the Europeans and Africans that took the trade to global heights. Overall, the technological improvements lowered transport, handling and shipping costs, enabling the achievement of economies of scale. Similarly, the growing demand for European goods in Africa, the growth in slave trading institutions and the decline in the Americas’ population fostered the slave trade. Finally, the profitability of the trade influenced various groups of stakeholders to become intensely involved, making it an international trade spanning four continents and altering their social, economic and political composition.
Macroeconomics is the study of an economy at the aggregate level in a specific nation. Economists use information gleaned from the aggregate level in order to determine the strength of an economy and the current stage of the business cycle. Macroeconomic indicators include gross domestic product, inflation, unemployment, and a variety of others. Economists track and report these macroeconomic indicators on a quarterly and annual basis for many stakeholders. Trends and other movements — such as short-term spikes — help a nation diagnose economic issues and make corrections if necessary. Gross domestic product is often among the most commonly reported macroeconomic indicators. Its purpose is to determine the market value of all goods produced by a nation in a given time period. Growth occurs when the resulting figures are positive, such as 2.1 or 4.3 percent for a given quarter. Higher figures indicate higher growth, naturally. Negative gross domestic product growth figures are also possible; these indicate contraction of output and a potential business-cycle downturn. Inflation is also a very important indicator; it measures the change in the purchasing power of currency over a given period. While natural economic growth can result in inflation, the most common occurrence of inflation comes from government intervention in mixed economies. Lowering interest rates or increasing the money supply can trigger inflation, traditionally defined as too many dollars chasing too few goods. Macroeconomic indicators tracking inflation may be computed monthly rather than quarterly. This allows a nation to assess this important figure on a more frequent basis and make changes as necessary to ward off the negative effects of this economic problem. Unemployment is also an important indicator in macroeconomic terms. Here, nations desire information on the hiring and investment decisions made by private-sector businesses. When unemployment decreases, more individuals are working and making money, which eventually finds its way back into the economy. Rising unemployment can signal businesses that are unsure of the moves in the aggregate economy and are attempting to downsize in order to remain profitable. With rising unemployment, a nation’s gross domestic product will fall, and the economy may enter a contraction period, with the length potentially unknown. The macroeconomic indicators above are all lagging indicators, meaning they report on activities in the past. The significant downside to lagging indicators is that the economy may already have changed since the indicators were computed. This means the economy may actually be doing better or worse than the numbers indicate. Therefore, it can be difficult to actually determine the strength of an economy based on these indicators alone.
Here are step-by-step instructions.

1. Calculate the added mass of a thin circular disc of radius R. Keep in mind that this will describe the momentum of water only during a very short time period when the cylinder is immersed into water only very shallowly (during ca two frames in the video). Don’t forget that only the lower half-space is filled with water. Detailed suggestions about how to calculate this added mass without using advanced math are given in Hint 4. To ensure there are no mistakes in your calculations, I recommend comparing your expression for the added mass with the one you found in the literature (a simple internet search will help). However, if you fail for some reason following Hint 4, you may try understanding a textbook where this added mass is calculated.

2. Measure from the video v_0 and v_1, the velocities of the cylinder immediately before and after plunging into water, in arbitrary units. Units don’t matter because we’ll be calculating the ratio of velocities, from which the units cancel out. Measuring v_0 with a reasonably good accuracy is a fairly easy task. Indeed, one can easily arrive at the conclusion that the frame rate is fast enough so that the speed increase due to the free-fall acceleration can be neglected, even if a period covering several frames is considered. When using displacement over several frames, the relative accuracy of the measurement will increase. If you want to do even better, you can consider even longer time periods while taking into account the effect of the free-fall acceleration, which means that instead of a linear regression of the cylinder’s displacement versus frame number data, you need to do a quadratic regression. The real challenge is determining v_1, as this needs to be based on two neighboring frames. The first frame needs to show that the cylinder has already touched the water (if we were to use the previous frame, too, then during a part of the inter-frame period the cylinder would have had the speed v_0). In the next frame, the cylinder immersion depth is small, i.e. the frame is usable, but that cannot be said about the following frame. Uncertainty is increased by the fact that the image of the cylinder is not exceedingly sharp, so it is difficult to determine the exact position of the cylinder. A small hint: instead of trying to figure out the contour of the cylinder, copy a small piece from the image of the cylinder in one frame, and displace it when it is overlaid on the other frame until the two images seemingly merge.

3. Apply the momentum conservation law: the momentum of water and cylinder together is conserved. The impact is essentially a plastic one, hence the energy is not conserved. What most likely happens to the energy is that during the impact, a short shock wave is generated that travels through the water and carries away the excess energy. Note that the momentum carried away by the shock wave is small and can be neglected because its energy-to-momentum ratio is given by the speed of sound in water, which is much larger than the speed of the cylinder.

4. To obtain the final answer, you will also need the radius-to-height ratio of the cylinder, which can be easily measured from a video frame.

Please submit the solution to this problem via e-mail to email@example.com.
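As a rough numerical illustration of steps 1 and 3 (not part of the original hints), the sketch below uses Python with made-up example values for the cylinder mass M, the radius R and the water density; the problem itself expects you to extract the relevant quantities from the video. It assumes the standard literature result that a thin disc accelerated broadside-on in unbounded fluid has added mass (8/3)ρR³, halved here because only the lower half-space contains water, and then applies momentum conservation for a plastic impact.

```python
# Minimal sketch of steps 1 and 3 with hypothetical example values.
rho = 1000.0   # water density, kg/m^3
R = 0.02       # disc (cylinder base) radius, m  -- example value only
M = 0.05       # cylinder mass, kg               -- example value only

# Added mass of a thin disc moving broadside-on in unbounded fluid is
# (8/3)*rho*R**3 (standard literature value); only the lower half-space
# holds water, so take half of it.
m_added = 0.5 * (8.0 / 3.0) * rho * R**3

# Step 3: treat the impact as plastic and conserve momentum,
# M*v0 = (M + m_added)*v1  =>  v1/v0 = M / (M + m_added).
ratio = M / (M + m_added)
print(f"added mass = {m_added:.4e} kg, predicted v1/v0 = {ratio:.3f}")
```

Comparing such a predicted ratio with the ratio v_1/v_0 measured from the video frames is exactly the consistency check that steps 2 and 3 ask for.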
The elbow is a complex joint formed by the articulation of three bones – the humerus, radius and ulna. The elbow joint helps in bending or straightening of the arm to 180 degrees and assists in lifting or moving objects. The bones of the elbow are supported by ligaments and tendons.

Bones and Joints of the elbow joint:
The elbow joint is formed at the junction of three bones:
The Humerus (upper arm bone) forms the upper portion of the joint. The lower end of the humerus divides into two bony protrusions known as the medial and lateral epicondyles, which can be felt on either side of the elbow joint.
The Ulna is the larger bone of the forearm, located on the inner surface of the joint. The curved shape of the ulna articulates with the humerus.
The Radius is the smaller bone of the forearm, situated on the outer surface of the joint. The head of the radius is circular and slightly cupped, which allows it to move against the humerus. The connection between the ulna and radius helps the forearm to rotate.
The elbow consists of three joints formed by the articulation of the three bones, namely:
The Humeroulnar joint is formed between the humerus and ulna and allows flexion and extension of the arm.
The Humeroradial joint is formed between the radius and humerus and allows movements like flexion, extension, supination and pronation.
The Radioulnar joint is formed between the ulna and radius and allows rotation of the lower arm.
Articular cartilage lines the articulating regions of the humerus, radius and ulna. It is a thin, tough, flexible, and slippery surface that acts as a shock absorber and cushion to reduce friction between the bones. The cartilage is lubricated by synovial fluid, which further enables the smooth movement of the bones.

Muscles of the Elbow Joint:
There are several muscles extending across the elbow joint that help in various movements. These include the following:
Biceps brachii: upper arm muscle enabling flexion of the arm.
Triceps brachii: muscle in the back of the upper arm that extends the arm and fixes the elbow during fine movements.
Brachialis: upper arm muscle beneath the biceps which flexes the elbow towards the body.
Brachioradialis: forearm muscle that flexes, straightens and pulls the arm at the elbow.
Pronator teres: this muscle extends from the humeral head, across the elbow, and towards the ulna, and helps to turn the palm to face backward.
Extensor carpi radialis brevis: forearm muscle that helps in movement of the hand.
Extensor digitorum: forearm muscle that helps in movement of the fingers.

Elbow joint ligaments and tendons:
The elbow joint is supported by ligaments and tendons, which provide stability to the joint. Ligaments are a group of firm tissues that connect bones to other bones. The most important ligaments of the elbow joint are the:
Medial or ulnar collateral ligament: composed of triangular bands of tissue on the inner side of the elbow joint.
Lateral or radial collateral ligament: a thin band of tissue on the outer side of the elbow joint.
Together, the medial and lateral ligaments are the main source of stability and hold the humerus and ulna tightly in place during movement of the arm.
Annular ligament: a band of fibers that surrounds the radial head and holds the ulna and radius tightly in place during movement of the arm.
The ligaments around a joint combine to form a joint capsule that contains synovial fluid. Any injury to these ligaments can lead to instability of the elbow joint. Tendons are bands of connective tissue fibers that connect muscle to bone.
The various tendons which surround the elbow joint include:
Biceps tendon: attaches the biceps muscle to the radius, allowing the elbow to bend.
Triceps tendon: attaches the triceps muscle to the ulna, allowing the elbow to straighten.
Nerves of the elbow joint: The main nerves of the elbow joint are the ulnar, radial and median nerves. These nerves transfer signals from the brain to the muscles that aid in elbow movements. They also carry sensory signals such as touch, pain, and temperature back to the brain. Any injury or damage to these nerves causes pain, weakness or joint instability. Arteries are blood vessels that carry oxygen-rich blood from the heart to the hand. The main artery of the elbow is the brachial artery, which travels across the inside of the elbow and divides into two small branches below the elbow to form the ulnar and radial arteries. The elbow joint is a hinge joint that provides great stability and movement for performing daily activities. The strong muscles that extend across the elbow joint bring about actions like flexion, extension, supination and pronation, enabling us to perform activities of daily living. These activities can be impaired if there is an injury or trauma to the elbow.
The earliest verifiable records of Jewish settlement in Poland date from the late 11th century. However, it is generally believed that Jews arrived in Poland much earlier. Many scholars dismiss the theory that a large number of followers of the Judaic faith came to Poland from the east in about 965 after the fall of the Khazar state. While it is true that the Khazar rulers converted to Judaism, there is substantial disagreement amongst researchers as to whether or not their subjects converted in significant numbers. The first Jews to arrive on Polish territory were merchants who were referred to as Radhanites. The Radhanites were merchants whose trade extended over vast distances between east and west. They were fluent in Arabic, Persian, Greek, Spanish, “Frankish” and “Slav” languages. Their entrance occurred simultaneously with the formation of the Polish state. One of them was Ibrahim ibn Jacob, the author of the first extensive account about Poland. In the summer of 965 or 966 Jacob made a trade and diplomatic journey from his native Toledo in Moslem Spain to the Holy Roman Empire and Slavonic countries. Feudal disintegration, the birth of towns and the development of commodity-money relations favored the settlement of Jews in Poland. Nevertheless, the influx of Jews was brought about mostly by their persecution in Western Europe, which gained in force during the crusades. Among the first Jews to arrive in Poland (in 1097 or 1098) were those banished from Prague. Jews from Bohemia and Germany settled primarily in Silesia. They usually engaged in trade and agriculture, and some owned landed estates. By the middle of the 14th century they had occupied thirty-five Silesian towns. Jewish settlement in other parts of Poland proceeded at a much slower pace; the first mention of Jewish settlers in Plock dates from 1237, in Kalisz from 1287, and of a Zydowska (Jewish) street in Krakow from 1304. Earlier, Mieszko III, the prince of Great Poland between 1138 and 1202 and the ruler of all Poland in 1173-77 and 1198-1202, employed Jews in his mint as engravers of dies and technical supervisors of all workers. Until 1206, Jews worked on commission for other contemporary Polish princes, including Casimir the Just, Boleslaus the Tall and Ladislaus Spindleshanks. From pure silver they struck coins called bracteates, which they emblazoned with inscriptions in Hebrew. In 1264, a successor to Mieszko III in Great Poland, Boleslaus the Pious, granted Jews a privilege known as the Kalisz statute. According to this statute (which was modeled on similar decrees issued in Austria, Bohemia and Hungary), Jews were exempted from municipal and castellan jurisdiction and were subject only to princely courts. The same statute granted Jews free trade and the right to conduct moneylending operations which were, however, limited only to loans made on security of “immovable property”. The Kalisz statute, which described the Jews as “slaves of the treasury”, ensured protection of persons, protection of property and freedom in conducting religious rites. They were also given the opportunity to organize their internal life on the principle of self-government of their individual communities. Similar privileges were granted to the Silesian Jews by the local princes: Prince Henry Probus of Wroclaw in 1273-90, Henry of Glogow in 1274 and 1299, Henry of Legnica in 1290-95 and Bolko of Legnica and Wroclaw in 1295. These privileges resulted in hostile reactions against the Jews by the Catholic clergy.
In 1267, the Council of Wroclaw created segregated Jewish quarters in cities and towns and ordered Jews to wear a special emblem. Jews were banned from holding offices where Christians would be subordinated to them and were forbidden to build more than one prayer house in each town. These resolutions, however, though they were reiterated during the subsequent councils in Buda in 1279 and Leczyca in 1285, were generally not enforced due to the profits which the Jews’ economic activity yielded to the princes. The turn of the 13th and 14th centuries saw the end of feudal disintegration in Poland. In the reunited kingdom the role of towns and the burghers grew. The rulers, interested in the development of a commodity-money economy, encouraged Jewish immigration. The most outstanding of those rulers was Casimir the Great who in 1334, a year after ascending the throne, acknowledged the privilege granted the Jews in Great Poland by Boleslaus the Pious in 1264. As a result Jews were exempted from German law and came under the jurisdiction of the voivodes. In the 14th and 15th centuries the main occupation of Jews in Poland was local and long-distance trade. Jews performed the role of middlemen in trade between Poland and Hungary, Turkey and the Italian colonies on the Black Sea. They also took part in the Baltic trade and commercial operations in Silesia. Owing to their links with Jewish communities in other countries as well as experience in trade and moneylending operations, Jewish merchants gained the advantage over local merchants, both in European and overseas trade. Following protests by the rich Polish burghers and the clergy, the scope of credit operations conducted by the Jews was seriously curtailed in the early 15th century. In 1423 the statute of Warka forbade Jews the granting of loans against letters of credit or mortgage and limited their operations exclusively to loans made on security of moveable property. The amassed capital was invested by the Jews in leaseholds. In the 14th and 15th centuries rich Jewish merchants and moneylenders leased the royal mint, salt mines and the collecting of customs and tolls. The most famous of them were Jordan and his son Lewko of Krakow in the 14th century and Jakub Slomkowicz of Luck, Wolczko of Drohobycz, Natko of Lvov, Samson of Zydaczow, Josko of Hrubieszow and Szania of Belz in the 15th century. For example, Wolczko of Drohobycz, King Ladislaus Jagiello’s broker, was the owner of several villages in the Ruthenian voivodship and the soltys (administrator) of the village of Werbiz. Jews from Grodno were also in this period owners of villages, manors, meadows, fish ponds and mills. However, until the end of the 15th century agriculture as a source of income played only a minor role among Jewish families. More important were crafts serving the needs of both their fellow Jews and the Christian population (fur making, tanning, tailoring). The expansion of the scope of economic activity carried out by the Jews sharpened competition between them and their Christian counterparts. In the 14th century anti-Jewish riots broke out in Silesia, which was ruled by the Bohemian-German dynasty of Luxembourg. These reached their climax during the epidemics of the Black Death when, as earlier in Western Europe, Jews were accused of systematically poisoning the wells. In 1349 pogroms took place in many towns in Silesia, and some of the refugees from those towns, as well as Jews banished from West European countries, sought shelter from persecution in Poland.
Streams of Jewish immigrants headed east to Poland during the reign of Casimir the Great, who encouraged Jewish settlement by extending royal protection to them. The first mentions of Jewish settlements in Lvov (1356), Sandomierz (1367), Kazimierz near Krakow (1386) and several other cities date from the second half of the 14th century. In the 15th century Jews appeared in many cities in Great Poland, Little Poland, Kuyavia, Pomerania and Red Ruthenia. In the 1450s Polish towns gave shelter to Jewish refugees from Silesia, which was then ruled by the Habsburgs. In 1454 anti-Jewish riots flared up in Wroclaw and other Silesian cities. They were inspired by the papal envoy, the Franciscan friar John of Capistrano. Though his main aim was to instigate a popular rebellion against the Hussites, he also carried out a ruthless campaign against the Jews, whom he accused of profaning the Christian religion. As a result of Capistrano’s endeavors, Jews were banished from Lower Silesia. Shortly after, John of Capistrano, invited to Poland by Zbigniew Olesnicki, conducted a similar campaign in Krakow and several other cities where, however, anti-Jewish unrest took on a much less acute form. Forty years later, in 1495, Jews were ordered out of the center of Krakow and allowed to settle in the “Jewish town” of Kazimierz. In the same year, Alexander Jagiellon, following the example of Spanish rulers, banished the Jews from Lithuania. For several years they took shelter in Poland until they were allowed back into the Grand Duchy of Lithuania in 1503. Towards the end of the Middle Ages Jews lived in 85 towns in Poland, and their total number amounted to 18,000 in Poland and 6,000 in Lithuania, which represented merely 0.6 per cent of the total population of the two states. The 16th and the first half of the 17th century saw increased settlement and a relatively fast rate of natural population growth among both Polish and Lithuanian Jews. The number of immigrants also grew, especially in the 16th century. Among the new arrivals there were not only the Ashkenazim, banished from the countries belonging to the Habsburg monarchy, that is Germany, Bohemia, Hungary and Lower Silesia (in the 1580s the whole of Silesia had only two Jewish communities, in Glogow and Biala), but also the Sephardim, who were driven away from Spain and Portugal. Moreover many Sephardic Jews from Italy and Turkey came to Poland of their own free will. Towards the end of the 16th century the flood of immigration abated and new communities were founded generally as a result of the movement of the population from the crowded districts to new quarters. In around 1648 Jews lived in over half of all cities in the Commonwealth, but the center of Jewish life moved from the western and central parts of Poland to the eastern voivodships, where two out of three townships had Jewish communities. Beginning in the middle of the 16th century Jews started to settle in the countryside in larger numbers. In the middle of the 17th century there were 500,000 Jews living in Poland, which meant some five per cent of the total population of Poland and the Grand Duchy of Lithuania. The legal position of the Jews was still regulated by royal and princely privileges and Sejm statutes, with the difference that in 1539 Polish Jews from private towns and villages became subordinated to the judiciary and administration of the owners. From that time on, an important role was played by privileges granted by individual lords.
On top of that, the legal status of Jews was still influenced by synodal resolutions and the common law. All this amounted to a considerable differentiation in the legal position of the Jewish population. In some cities Jews were granted municipal citizenship, without, however, the right to apply for municipal positions. In many towns, especially the gentry towns, Jews were given complete freedom in carrying out trade and crafts, while in others these freedoms as well as the right to settle were restricted. Finally, there were also towns where Jews were not allowed to settle. In the 16th century more than twenty towns obtained the privilegia de non tolerandis Judaeis. These included Miedzyrzec in 1520, Warsaw in 1525, Sambor in 1542, Grodek in 1550, Vilna in 1551, Bydgoszcz in 1556, Stryj in 1567, Biecz, Krosno and Tarnogrod in 1569, Pilzno in 1577, Drohobycz in 1578, Mikolajow in 1596 and Checiny in 1597. In practice, however, this ban was inconsistently observed. In other locations, separate suburbs, “Jewish towns”, were formed (for example in Lublin, Piotrkow, Bydgoszcz, Drohobycz and Sambor), or the Jews fought for and won the revocation of those discriminatory regulations, for example in Stryj and Tarnogrod. The restrictions imposed on the territorial expansion of Jewish quarters forced the Jews to seek the privilegia de non tolerandis christianis, or bans on Christian settlement in Jewish quarters. Such privileges were won by the Jewish town of Kazimierz in 1568, the Poznan community in 1633 and all Lithuanian communities in 1645. Between 1501 and 1648 Jews further intensified their economic activity. This was accompanied by a basic change in the occupational structure of the Jewish population in comparison with the previous period. The primary sources of income for Jewish families were crafts and local trade. The magnates, for whom Jewish traders and craftsmen were an important element in their rivalry with the royal towns, generally favored the development of Jewish crafts. On the other hand, in larger royal towns as well as in the ecclesiastical towns, Jewish craftsmen and also Christian craftsmen who were not members of a guild (known as partacze or patchers) were exposed to permanent harassment from the municipal authorities and the Christian guilds. They could carry out their occupations only clandestinely. In a small number of towns, for example in Grodno, Lvov, Luck and Przemysl, some Jewish craftsmen managed to wrest for themselves the right to perform their trade from the local guilds, but only after having to pay heavy charges. Despite these difficulties Jewish crafts, which were encouraged by royal starosts and owners of gentry jurisdictions, not only held their ground but expanded considerably. In the middle of the 17th century Polish and Lithuanian Jews practiced over 50 trades (43 in Red Ruthenia) and were represented in all branches of craftsmanship. The most numerous were those who made food, leather and textile products, clothing and objects of gold and pewter, as well as glass manufacturers. In the first half of the 17th century Jewish craftsmen founded their own guilds in Krakow, Lvov and Przemysl. In Biala Cerkiew several Jewish craftsmen (tailors and slaughterers) belonged to Christian guilds in 1641. In the 16th and the first half of the 17th century Jews played an outstanding role in Poland’s foreign trade.
They contributed to the expansion of contacts with both the east and the west and were instrumental in importing foreign commercial experience to Poland. Particularly animated trade contacts were maintained by Jewish merchants with England and the Netherlands through Gdansk, and Hungary and Turkey through Lvov and Krakow. Jews exported not only Polish agricultural produce and cattle but also ready-made products, particularly furs and clothing. In return they brought in goods from east and west which were much sought after in Poland. Jewish wholesalers appeared at large fairs in Venice, Florence, Leipzig, Hamburg, Frankfurt on Main, Wroclaw and Gdansk. In order to expand their trade contacts they entered into partnerships. For example in the mid-16th century Jewish merchants from Brest Litovsk, Tykocin, Grodno and Sledzew founded a company for trade with Gdansk, while in 1616 a similar company was established by merchants from Lvov, Lublin, Krakow and Poznan. At the turn of the 16th and 17th centuries, in many towns Jewish and Christian merchants set up joint ad hoc companies in order to conclude profitable financial operations. In European and overseas trade only a relatively small number of Jews were engaged. The most numerous group among Jewish merchants were owners of shops as well as stall keepers and vendors whose whole property was what they put on show on the stall in front of their houses or on a cart, or what they carried in a sack on their backs. The expansion of Jewish trade troubled the burghers for whom Jewish competition was all the more painful since they now had yet another rival in the developing gentry trade. The struggle of part of the burghers against Jewish merchants manifested itself among other things in attempts at curtailing Jewish trade. The monarchs, though generally favorably disposed towards the Jews, under the pressure from the burghers and the clergy passed a number of decrees which restricted Jewish wholesale trade to some commodities or else to certain quotas of purchases they were allowed to make. More severe restrictions were contained in agreements concluded between municipal authorities and Jewish communities, though these were seldom observed in practice. In private towns, Jewish trade, which yielded considerable profit to the owners, could develop without any obstacles. The Jews’ trading activity also encompassed credit operations. The richest Jewish merchants were often at the same time financiers. The most famous Jewish bankers were the Fiszels in Krakow and the Nachmanowiczs in Lvov as well as Mendel Izakowicz and Izak Brodawka in Lithuania. Those and a number of other Jews pioneered centralized credit operations in Poland. Though banking institutions created by them mainly financed large Jewish tenancies and wholesale trade, as a sideline they also lent money to the gentry on pledge of incoming crops and to Jewish entrepreneurs. A positive role was also played by much smaller loans granted by Jews to many small craft and trade shops. In many cases these loans were instrumental in opening a business. However, the other side of the matter must not be overlooked. The lending of money at high interest led to the impoverishment of both Jewish and Christian debtors. Some of them were put in prison as a result and their families were left with no means of subsistence. This money lending activity aggravated prejudice against Jews among the burghers, something which had always been there anyway due to their religious and traditional separateness. 
An important field of the Jews' economic activity was tenancies. In the period under discussion, next to the rich merchants and bankers who held in lease large economic enterprises and the collection of incomes from customs and taxes, there appeared a numerous group of small leaseholders of mills, breweries and inns. The number of Jewish subtenants, scribes and tax collectors employed by rich leaseholders also increased, and some of the latter attained important positions. For example, in 1525, during the ceremonies connected with the Prussian Homage, the main collector of Jewish taxes in Lithuania, Michal Ezofowicz, was knighted and given the crest of Leliwa without relinquishing his Jewish faith. His brother Abraham Ezofowicz, who had been baptized, was also knighted and granted the starosty of Minsk and the office of Lithuanian deputy treasurer. In the first quarter of the 16th century, Jewish leaseholders performed their functions as full-fledged heads of the enterprises subordinated to them, for example salt mines and customs offices. "In this period," wrote Justus Ludwik Decius, the chronicler of Sigismund the Old, in 1521, "Jews are gaining in importance; there is hardly any toll or tax for which they would not be responsible or at least to which they would not aspire. Christians are generally subordinate to the Jews. Among the rich and noble families of the Commonwealth you will not find one who would not favor the Jews on their estates and give them power over Christians."

The gentry, who in the 16th century conducted an unrelenting struggle against the magnates, came out against the leasing of salt mines, customs and tolls to the Jews by the lords and the king. Under the influence of the gentry, the diet of Piotrkow in 1538 forbade Jews to take public incomes in lease. This ban was reiterated several times by subsequent diets, but it proved only partly effective. In 1581 the autonomous representation of the Jews (the Diet of the Four Lands), which gathered in Lublin, took a decision which, under penalty of anathema, forbade fellow Jews to take in lease salt mines, mints, taxes on the sale of liquor, and customs and tolls in Great Poland, Little Poland and Mazovia. This ban was justified in the following way: "People fired by the greed of great income and wealth owing to those large tenancies may bring unto the whole [Jewish population] - God forbid - a great danger." From that time on, Jewish leaseholders were active only in Red Ruthenia, Podolya, Volhynia, the west-bank Ukraine and Lithuania. In the tenancies supervised by the Jews, as in the estates run by the gentry, feudal exploitation of the peasant serfs often led to local revolts, which in the Ukraine turned into a Cossack and peasant uprising. The cooperation of the Jewish leaseholders with the magnates in the latter's colonial policy caused these revolts often to be conducted under the slogan of struggle against the Poles and the Jews.

Next to crafts, trade, banking and leasing operations, agriculture became an increasingly important source of income for the Jewish population in the eastern regions of the Commonwealth. Maciej Miechowita, author of the Polish Chronicle (1519), when mentioning Jews, says that in Ruthenia they were engaged not only in moneylending and trade but also in soil cultivation. In towns Jews owned fields and gardens. In Chelm in 1636 Jewish landless peasants were forced to do serf labor. In villages Jews also tilled the land adjoining the inns, mills and breweries they held in lease.
Some Jews earned their living as paid kahal officials, musicians, horse drivers, factors on gentry estates and in the houses of rich merchants, as middlemen known as barishniki, servants, salesmen, etc. There was also a large group of beggars and cripples without any means of subsistence. Only some of them obtained assistance from time to time from charity organizations and were given a place to sleep in an almshouse.

In view of the growing financial differentiation among the Jews, social conflicts intensified. The middle of the 16th century saw the beginning of opposition by Jewish craftsmen against individuals who placed their capital in leather, textile and clothing manufacture. The struggle of the populace against rich merchants and bankers was reflected in the activity of Salomon Efraim of Leczyca, an outstanding plebeian preacher. In his book Ir Gibborim (The Town of Heroes), published in 1580 in Basle, he sharply criticized the exploitation of the poor by the rich. He also attacked the rabbis who tried to gain the favor of the wealthy Jews. He presented his views not only in his books and lectures in the synagogue, but also during fairs which were attended by numerous Jews.

There are records of joint revolts by Jewish craftsmen and Christian "patchers" against the guild elders. There were also joint revolts of the Jews and the burghers against the gentry. This found expression in an agreement which the Jews of Kamionka Strumilowa concluded with the municipal authorities in 1589 "with the consent of all the populace". The councilors "accepted the Jews into their own laws and freedoms while they [the Jews] undertook to carry the same burdens as the burghers". The Jews pledged themselves to help in keeping order and cleanliness in the town, to hold guard and to take part in anti-flood operations together with Christians. The latter promised that they would "defend those Jews as our real neighbors from intrusions and violence of both the gentry and soldiers. They would defend them and prevent all harm done to them… since they are our neighbors."

The rapid development of Jewish settlement and economic activity was accompanied by the expansion of their self-government organization. In the 16th century its structure had no equal anywhere in Europe. As in the Middle Ages, every autonomous Jewish community was governed by its kahal, a collegiate body composed of elders elected as a rule from among the local wealthiest. The kahal organized funerals and administered cemeteries, schools, baths, slaughterhouses and the sale of kosher meat. In the closed "Jewish cities" it also took care of cleanliness and order in the Jewish quarter and the security of its inhabitants. To this should be added the administering of charities, such as the organization of hospitals and other welfare institutions and the dowering of poor brides. Another important function was to establish the amount of taxes each individual household in the given community was to pay.

The further hierarchic development of the Jewish autonomous institutions was connected with the difficulties which the authorities encountered in the early 16th century in exacting taxes. Between 1518 and 1522 Sigismund I the Old decreed the foundation of four Jewish regions called lands. Each of these lands was to elect at a special diet its elders, tax assessors and tax collectors. In 1530 the king established a permanent arbitration tribunal based in Lublin which was to examine disputes between Jews from various lands.
In 1579 Stephen Bathory called into being a central representation of Jews from Poland and Lithuania with responsibility for exacting the poll tax which had been introduced for the Jewish population in 1549. This institution, known as the Diet of the Four Lands (Va'ad Arba Arazot), was constituted at a congress in Lublin in 1581. The Diet of the Four Lands, which usually was summoned once a year, elected from among its number a council, known as the Jewish Generality. The latter was headed by a Marshal General and included a Rabbi General, a Scribe General and Treasurers General. The diets were attended by representatives of both Poland and Lithuania until 1623 when, following the establishment of a separate taxation tribunal for Lithuanian Jews, a separate diet of Lithuanian Jews was also set up. These institutions continued in existence until 1764. The diet of Polish Jews usually convened in Lublin, sometimes in Jaroslaw or Tyszowce, while the Lithuanian diets debated most often in Brest Litovsk.

The diet, or Va'ad, represented all the Jews. It carried out negotiations with the central and local authorities through its liaison officers (shtadlans) who, through their contacts with deputies, tried to influence the decisions concerning Jews taken by the Sejm and the local diets of the gentry. During the sessions of the va'ads not only fiscal matters were discussed but also those related to the well-being and cultural life of the Jewish population in the Commonwealth. They took decisions on the lease of state products, the rates of interest in credit transactions among Jews, the protection of creditors against dishonest bankrupts, the upbringing of young people, the protection of the family, etc. The Va'ad also took decisions on the taxation of the Jewish population, for example for the defensive needs of the country. The main tax was the poll tax. In addition the Jews, like the rest of the burghers, paid taxes for the city's defenses.

Besides taxes, all townsfolk, irrespective of religion, were obliged to perform certain tasks and contribute money in order to build and expand defensive systems and maintain permanent crews of guards. The Jews, like the Christian population, had personally to contribute to the town's defense preparedness. In the Jewish quarter the most important structure was the fortified synagogue. In the 16th and 17th centuries several dozen such buildings were erected in Poland's eastern borderlands, including such places as Brody, Buczacz, Czortkow, Husiatyn, Jaroslaw, Leszniow, Lublin, Luck, Podkamien, Pomorzany, Sokal, Stryj, Szarogrod, Szczebrzeszyn, Szydlow, Tarnopol, Zamosc and Zolkiew. One of the main duties of all townsfolk, including the Jews, was to defend the city as a fortified point of resistance in case enemy troops succeeded in forcing their way into the country.

In the early 16th century, in the Grand Duchy of Lithuania, to this was added the duty of providing a contingent of soldiers. After 1571 this duty was changed to an appropriate money due. Jews were first ordered to provide an army contingent in 1514, but this obligation began to be exacted more consistently only after 1648. As was the case with the remaining population, Jews acquired their military training during obligatory exercises, and their fighting preparedness and ability to wield arms were tested during special parades. The first mention of a Jew's direct participation in battle against enemies of the Commonwealth dates from the middle of the 16th century.
During the reign of Stephen Bathory there served in the Polish army one Mendel Izakowicz from Kazimierz near Krakow. He was a bridge builder and military engineer, and during the war against Muscovy he rendered considerable services to the Polish army. During the war with Muscovy in 1610-12, in one regiment alone, probably one of those belonging to Lisowski's light cavalry, more than ten Jews served at one time. A certain number of Jews also fought on the Polish side in the Smolensk war of 1632-34, and some of them were taken prisoner by the enemy.

The year 1648, when the Cossack uprising under Bohdan Chmielnicki broke out, was a breakthrough in the history of both the Commonwealth and Polish Jewry. The country was plunged into an economic crisis made worse by war devastation. The wars against the Ukraine, Russia, Sweden, Turkey and the Tartars, which Poland fought almost uninterruptedly between 1648 and 1717, brought in their wake a lasting decline of towns and agriculture and decimated the population. During Bohdan Chmielnicki's revolt and the wars against the Ukraine and Russia, the Jewish communities in the areas occupied by enemy troops were completely wiped out. Some Jews were murdered, some emigrated to central Poland, and the rest left for Western Europe. The drop in the number of the Jewish population during the Ukrainian uprisings (1648-54) is estimated at some 20 to 25 per cent, that is, between 100,000 and 125,000 people.

A rapid growth in the number of the Jewish population was recorded only in the 18th century, after 1717. It is estimated that in 1766, when the census of Jews obliged to pay the poll tax was concluded, there were in the Commonwealth as a whole some 750,000 Jews, which constituted seven per cent of the total population of Poland and the Grand Duchy of Lithuania. According to Rafal Mahler, at this time some 29 per cent of all Jews lived in ethnically Polish areas, 44 per cent in Lithuania and Byelorussia and 27 per cent in regions with a predominantly Ukrainian population. Two thirds of all Jews lived in towns and the remainder in the countryside. Following the first partition of Poland, some 150,000 Jews found themselves under Austrian occupation, about 25,000 in the Russian zone and only 5,000 in Prussia. The population census conducted in Poland in 1790-91 demonstrated a further increase in the number of Jewish inhabitants. Tadeusz Czacki estimated them at over 900,000, that is, some 10 per cent of the total population of the then Commonwealth. In the same period (1780) there were over 150,000 Jews in the Austrian zone and several tens of thousands in the remaining partition zones.

The reconstruction of towns after each war took a long time. The quickest to emerge from ruin were the estates of magnates, who willingly employed the Jewish population. In the eastern part of the Commonwealth, and partly in central Poland, Jews played an important role in reactivating crafts, and not only such traditionally Jewish branches as goldsmithery, pewter work, haberdashery and glass manufacture, furriery and tailoring, but also tin and copper working, arms production, carpentry, printing, dyeing and soap manufacture. There appeared in this period a large number of Jewish craftsmen who traveled from village to village and from manor to manor in search of temporary employment. The material situation of Jewish craftsmen was generally difficult. The pauperization of towns and villages made it hard for both Jewish craftsmen and their Christian counterparts to sell their products.
In the large cities, rivalry between the guilds on the one hand and the Jewish and Christian "patchers" on the other bred conflicts. These often ended in compromise, and Jews were admitted to Christian guilds more often than ever before. At the same time, next to the old ones, new, purely Jewish guilds were formed, for example in Poznan, Krakow, Lvov, Przemysl, Kepno, Leszno, Luck, Berdyczow, Minsk, Tykocin and Bialystok.

During the wars of the middle of the 17th century, Jewish wholesale trade, both long-distance and foreign, came nearly to a standstill. Only in some cities, for example Brody and Leszno, did Jewish merchants, thanks to considerable support on the part of the magnates, succeed in renewing contacts with Gdansk, Wroclaw, Krolewiec, Frankfurt on Oder and, to a lesser degree, with England. Thanks to the magnates' assistance, local Jewish trade also began to expand. Most shops in the reconstructed town halls were leased to Jews (for example in Staszow, Siemiatycze, Kock, Siedlce and Bialystok). Peddling was also spreading, as a result of which trade exchange between town and country, interrupted during the wars, was revived.

After the wars of the middle of the 17th century, radical changes took place in the organization of credit. The large banking houses disappeared, and the kahals, instead of being creditors, turned into debtors. Representatives of the gentry and the clergy increasingly often placed their money with Jewish communities, at the same time forcing the latter to take collective responsibility for the debts of individual Jews. In case a kahal was unable to repay its debts, the gentry had the right to seal and close down its prayer house, imprison the elders and confiscate goods belonging to merchants. In order to safeguard themselves against the recklessness of individual debtors, the communities applied the credit hazakah, which consisted in the community issuing permissions to those of its members who wanted to avail themselves of credit. Whether someone was given a loan or not was often decided by a clique consisting of the kahal elders. Part of the capital borrowed from the gentry and the clergy and augmented by means of interest disappeared into the pockets of the kahal oligarchy, while part of it was turned over to nonproductive purposes, for example to financing the defense in ritual murder trials, paying for the lords' protection, etc.

In the first half of the 18th century the gentry and the clergy became anxious about the fate of the money placed with the Jewish communities and about the interest on unpaid debts, which was growing like an avalanche. When the above-mentioned methods failed to produce adequate results, the krupki were applied, that is, consumption taxes the income from which was devoted entirely to paying off the debts. Finally, in 1764, a decision was taken to abolish the kahal banks altogether and to service the debts by taxing each Jew.

As a result of the general impoverishment of the Jewish population in the second half of the 17th and in the 18th century, the differences between the common people and the kahal oligarchy deepened, the latter trying to pass the burden of the growing state and kahal taxes onto the shoulders of the poorer classes. In several cities, for example in Krakow, Leszno and Drohobycz, the Jewish poor revolted against the kahal oligarchies. A fierce struggle against the kahals was carried on by the Jewish guilds, which tried to free themselves from their economic dependence.
At the same time, especially in the larger royal towns, conflicts fired by economic rivalry broke out between Jews and Christians. The tense atmosphere of this struggle, conducted usually under religious slogans, was conducive to the outbreak of anti-Jewish riots and pogroms, for example in Krakow, Poznan, Lvov, Vilna, Brest Litovsk and several other cities. Particularly menacing were the ritual murder trials organized in this period of religious prejudice. However, much more dangerous was the situation in the Ukraine, where the Jews had returned only in the late 17th century. The role played in the 18th century by Jewish leaseholders in the Polish magnates' colonial policy turned the anger of the local populace, as had been the case during Bohdan Chmielnicki's uprising, against both the Polish gentry and the Jews generally. In 1768, during a peasant rebellion called the koliszczyzna, which was organized under the slogans of "winning independence" and the defense of the Russian Orthodox religion, several thousand gentry and several tens of thousands of Jews were murdered in Uman and several other Ukrainian cities.

The events in the Ukraine in 1768 turned the minds of the more enlightened section of Polish society to the problem of carrying out fundamental political reforms and solving both the peasant and the Jewish question. The latter was not only discussed in the last decades of the Commonwealth; practical ways of solving it were also sought. Many pamphlets and Sejm speeches dealt with this matter. Some were for the further limitation of the Jews' economic activity, while others spoke of turning the Jews into subjects of the gentry, as was the case with the peasants. Finally, there were also those who demanded the expulsion of the Jews from Poland. These views were opposed by an enlightened group of the gentry, led by Tadeusz Czacki and Maciej Topor Butrymowicz. This group demanded the limitation of the authority of the kahals and a change in the occupational structure of the Jews through their employment in manufacturing and on agricultural farms. It was also for the assimilation of the Jews and their inclusion in the burgher estate.

In the 1760s the Jewish question was the subject of Sejm debates. In 1764 the Sejm passed a resolution on the liquidation of the central and land organization of the Jews. In 1768 it decided that Jews might perform only such occupations as were allowed to them by individual agreements with the towns. From the point of view of the Jews, this meant full dependence on their age-old rival in the economic field, that is, on the burghers. The Sejm of 1775 took up the problem of the agrarianization of the Jewish community and passed a resolution granting tax exemptions to those Jews who settled on uncultivated land. The same law forbade rabbis to wed those who had no permanent earnings. Jewish reforms were also discussed during the Great Sejm, which elected a special commission for Jewish affairs. However, this commission did not manage to submit its findings before 14 April 1791, the date when the law on towns was passed, on the basis of which Jews were not included in the burgher estate. Later the Jewish question was dealt with several times; however, the Four Year Sejm failed to approve any fundamental reforms in this field.
The only important concession to the Jews during the debates of the Four Year Sejm was contained in the law of the police commission of 24 May 1792, which said that Jews, like all other citizens of the Commonwealth, could avail themselves of the right not to be put in prison without a court verdict. Though no important law concerning the solution of the Jewish question was approved by the Four Year Sejm, the very fact that the matter was discussed was welcomed with appreciation by part of the Jewish community. On the first anniversary of the passing of the Third of May Constitution, services of thanksgiving were held in all synagogues and a special hymn was published.

Neither was the difficult Jewish question solved in the Prussian and Austrian partition zones. In the Prussian zone, by royal decree, the Jewish population was subordinated to the Prussian Jewish ordinance (General Judenreglement) of 17 April 1797. The right to permanent residence in towns was granted only to rich Jews and those engaged in trade. Jews were forbidden to pursue those occupations which were already represented in the guilds. The poor Jews, the Bettel Juden, were ordered to be expelled from the country. The activity of the Jewish self-government organizations was limited almost exclusively to religious affairs.

In the Austrian partition zone the attitude towards the Jewish question went through two stages. In the initial period, that is, during the reign of Maria Theresa and the first years of the rule of Joseph II, the separateness of the Jewish population from the rest of Galician society was retained and, with only slight modifications, Jewish self-government was preserved. The poorest Jews were expelled from the country. The remainder were limited in their right to get married, removed from many sources of income and forced to pay high taxes. In the second half of the reign of Joseph II the Jews were recruited into the army (1788), and then, on the strength of the grand Jewish ordinance of 1789, certain restrictions on the Jewish population were lifted and attempts were made to make them equal with the burghers. Expulsions of the Jewish population from Galicia were discontinued, the separate Jewish judiciary was abolished and Jewish self-government was restricted. Jews were ordered to wear dress similar to that of the Christian population and were obliged to attend either German or reformed Jewish schools. However, the separate Jewish tax was retained and their economic activity in the countryside was restricted. Some of these decrees met with decided opposition on the part of the Jews and were eventually revoked. In 1792 Leopold II, Joseph II's successor, changed the military duty of the Jews into a money contribution, while the decree ordering the Jews to wear Christian dress was never put into practice.

In the second half of the 17th century Jews took an increasingly numerous part in the wars fought by the Commonwealth. During the wars against the Cossacks and the Tartars, the Jewish population provided infantry and mounted troops. Some young Jews fought in the open field, for example in the battle of Beresteczko. Jews also fought in the defense of besieged cities, for example Tulczyn, Polonne, Lvov and others.
During Poland's wars with Sweden (1655-60), Russia (1654-67) and Turkey (1667-99), Jews provided recruits and took part in the defense of cities (for example Przemysl, Vitebsk, Stary Bychow, Mohylew, Lvov and Trembowla) and, together with the burghers and gentry, organized sorties against the enemy's camp (for example at Suraz in 1655, in the vicinity of Podhajce in 1667 and at Przemysl in 1672). The military engineer Jezue Moszkowicz of Kazimierz near Krakow, who served in the Polish army in 1664, saved heavy mortars and other weapons from being sunk during the war against Russia.

During the Kosciuszko Insurrection and the war against Tsarist Russia in 1794, Jews supported the uprising either in auxiliary services or under arms. For example, they took part in the April revolution in Warsaw, where many of them perished. After the Russian army was repulsed from Warsaw, the idea was born of creating a separate military unit composed of Jewish volunteers. This idea was backed by the commander in chief of the Insurrection, Tadeusz Kosciuszko. "Nothing can convince the far away nations more about the holiness of our cause and the justness of the present revolution," he wrote in a Statement on the Formation of a Regiment of Jews, "than that, though separated from us by their religion and customs, they sacrifice their own lives of their own free will in order to support the uprising." The Jewish regiment under Colonel Berek Josielewicz took part in the fighting during the storming of the Praga district of Warsaw by Tsarist troops on 4 November 1794. With the blood shed in this war, the Jews demonstrated the loyalty of the Jewish population to the cause of the revolution and to the slogans it upheld: equality and fraternity.
What does it mean to be Australian? What do people from other countries think of Australia? Would they enjoy visiting Australia? Why? Why not?

In this unit of work students begin to analyse how cultures and values differ and why. They look at aspects of Australia which could interest people from other countries, and start to discover the world's many varied cultures, and how much there is potentially to understand and learn. In this unit the students' deeper investigation focuses on Japan. Students research and compare Japanese and Australian culture, and analyse what tourist attractions and destinations may appeal to a Japanese student. They create two appropriate itineraries and justify the choices made in their creation. Teachers encourage students to revisit the Curriculum Framing Questions throughout the unit to guide research and discussion, and to sustain the purpose of the activities.

- Essential Question: Who lives and travels in our world?
- Unit Questions: What is special about Australia? What in our lives reflects our culture? Why do people visit Australia? How can I use what I value to plan an entertaining and informative itinerary within a reasonable budget? How do we ascertain/accommodate the interests of other people?
- Content Questions: What is culture? What places within Australia have cultural or historical significance? What are a travel itinerary and a reasonable budget? Which activities, attractions and destinations will appeal to people from other countries and cultures? Why? Where can I obtain the necessary information for my travel itinerary?

View how assessment is used in this unit plan. These assessments help students and teachers set understandable goals, monitor student progress, provide feedback, assess thinking, performance and products, and reflect on learning throughout the activities.

Students' Prior Knowledge
- ICT skills in Microsoft Excel* (essential)
- ICT skills in Microsoft Word* and PowerPoint* (desired)
- Interpreting information from a variety of formats (e.g. timetables, travel brochures, websites)
- Mapping skills
- Understanding and appreciating the interests of a person from another culture
- Awareness of cultural differences and similarities
- Using a variety of mathematical skills (e.g. using 24 hour time)
- Persuasive writing
- Popular tourist destinations in Queensland and Australia

Teachers' Professional Learning
- ICT professional development in:
  - Microsoft Excel*
  - Word-processed tables
  - Hyperlinking
  - Digital photos
  - Internet (e.g. saving pictures of tourist attractions, accommodation information, etc.)
  - Publication and website software
  - Multimedia presentation software
- Professional Learning Team with:
  - LOTE teacher
  - ICT teacher
  - Japanese guest speaker
  - Year 6 and 7 students attending the Japanese trip
- Professional reading:
  - Productive Pedagogies (e.g. Intellectual Quality, Connectedness, Recognition of Difference)
  - Use of Higher Thinking Skills (e.g. Y chart, T chart, Venn diagrams, etc.)

Teaching and Learning Strategies
- Whole class discussions
- Typical Australian (e.g. stereotypes)
- Stimulus pictures: "When Cultures Meet" cartoons to progress discussion about cultures
- Y chart to brainstorm types of foods in Australia and Japan
- Comparison of Japanese and Australian cultures using a Venn diagram
- Philosophical discussion on questions of culture and stereotypes
- LOTE lessons continue introducing students to Japanese culture
- Japanese guest speaker
- ICT lessons: Microsoft Excel*, Microsoft Word* and Microsoft PowerPoint*
- Persuasive text writing
- Excursion to Koala Park
- Concept map about culture, using this information to complete the 'Cultural Iceberg' and build awareness of culture
- Maths activities on 24-hour time, scale and coordinates, interpreting graphs, budgeting, etc.
- Research into popular tourist destinations in the local area and throughout Australia
- Pair and share of persuasive justifications for travel itineraries
- Constructing a T chart listing facts and opinions for Australian and Japanese youth activities
- Writing a personal profile
- Building and writing a profile of a Japanese exchange student
- Writing a local travel itinerary (1 to 2 days)
- Writing a national travel itinerary (7 to 8 days)
- Constructing a spreadsheet
- Writing a persuasive justification stating why one travel itinerary is better

Self Paced Learning
Provide Microsoft PowerPoints* and a student checklist for students to be able to pace their own learning.

Teaching and Learning Activities

Introducing the Unit
Teachers use the curriculum framing questions to encourage students to explore their identity. Teachers identify learning activities to help students understand the Australian culture, for example:
- Teachers ask students to draw a picture of a typical Australian man or woman and attach some adjectives to describe their drawing. Suggestions are collected by the teacher and organized using a word chart. The teacher asks students "What is culture?" and "What is a stereotype?"
- Class philosophical discussion on questions of culture and stereotypes. Children identify their own cultural heritage based on their parents'/grandparents' country of origin. The teacher poses the question "What in our lives reflects our culture?" Students respond to this question by creating a blog about their daily life and the Australian culture.
- Stimulus pictures and "When Cultures Meet" cartoons can be used to progress discussion about cultures.
- Students bring a tin can to school to complete the "Can of Aussie Icons" activity. Children design a can label to describe and "sell" a product that encapsulates all the ingredients that make up a common representation of Australia.
- The teacher asks students again "What in our lives reflects our culture?" Students participate in a brainstorming activity that calls for suggestions about types of food we have seen over time. Can it reflect our culture? Suggestions are organized in a Y chart.
- Students view popular images of youth activities through Australian sports and music. Students contribute to a T chart listing facts and opinions.
- Adding to the previous blog, the students revisit the question "What in our lives reflects our culture?"
- Students create a multimedia presentation of their personal profile (PPT 188KB) including general information, their favourite food, interests and hobbies, where they go to school, and where they live, including a description of their house and their family.
- In pairs or groups the students review other students' multimedia presentations and provide written or verbal feedback.
- The teacher poses the question "How can we ascertain the interests of other people from other cultures?"

In collaboration with the LOTE (Language Other Than English) teacher at your school, identify learning activities to help students understand the Japanese culture and identity, for example:

Begin by asking the students "What do we know about the Japanese culture?"
- The LOTE teacher provides students with cultural information about Japan (conversational Japanese).
- Use a map of the world to colour in the countries of Australia and Japan to identify their geographical positions and investigate an economic relationship by drawing lines to indicate import/export commodities.
- Participate in a research activity about types of food in Japan. Can it reflect the culture? Suggestions are organized in a Y chart. Students bring a cushion and placemat to school to participate in a typical Japanese meal using chopsticks. Students compare types of food in Japan to types of food in Australia. Students reflect on what they like/dislike about both types of food and styles of eating.
- Research popular images of youth activities including Japanese sports and music. Students contribute to a T chart listing facts and opinions.
- Optional: if possible, have a Japanese person create a blog about daily life and culture in Japan and share the blog on Australian life and culture.
- Listen to a guest speaker from a local Japanese cultural association sharing daily routine, hobbies and cultural experiences. Students reflect on the presentation, and on how Japanese values are reflected in their lifestyle.
- Complete a concept map about culture and then use this information to complete a culture iceberg (JPG 272KB), considering what can be seen culturally and what underpins this with regard to values etc. For more information on this type of activity and graphic organiser refer to: unity in diversity*.
- Create a multimedia presentation on Japanese culture (PPT 509KB) to share with the class.
- Students work in pairs to review others' work and provide feedback.
- Invent the character of an exchange student from Japan. Children consider name, age, gender, city of origin, family names, work, interests, background, etc. and share through creating a multimedia presentation (PPT 186KB).

Students revisit both blogs and consider the similarities and differences between the daily life of a Japanese student and their own, and why Japanese children may find certain aspects of our culture and lifestyle interesting.

The teacher leads discussions on the comparison of Australian and Japanese cultures through the students:
- Identifying common Japanese brand names that can be found in their daily experiences (cars, electrical equipment, etc.). Advertisements can be cut out and pasted on a class chart.
- Comparing the Can of Aussie Icons and the Culture Iceberg of Japan.
- Comparing the Y chart listings on the types of food identified in each culture.
- Comparing the T chart listings on popular images of youth activities for Australians and Japanese, including sports and music.
- Creating a Venn diagram comparing the cultures of Japan and Australia, finding commonalities and differences. The Y and T charts previously created will scaffold this Venn diagram activity.

Students research tourist attractions and destinations in Australia. The teacher prepares support materials (PPT 549KB) to scaffold student research. Students investigate popular tourist attractions by:
- Writing a day itinerary to a tourist attraction and participating in an excursion.
Students observe tourists and their behaviours in relation to the attraction they are interacting with. The teacher asks students to reflect during their visit on "Why do people visit Australia?"
- Identifying the unique attractions on offer in regional, State or Australia-wide areas that would be of interest to a Japanese exchange student. Students cut and paste pictures or write the name of the attraction on a large map of Australia. Students consider the question "What is special about Australia?"

Students investigate popular tourist destinations by:
- Brainstorming: the teacher poses the question "What places within Australia have cultural or historical significance?"
- Collecting tourist brochures to gather information about the attractions on offer in different towns and cities in Australia. They consider the descriptive and persuasive language within the text of the brochures. Students use pictures for itineraries.
- The teacher presents 6 possible tourist venues and directs a whole class lesson using criteria to rate the appropriateness of local tourist venue selections: uniqueness, age appropriateness, cultural appropriateness, cost, transport ease, personal interest. Students return to the content question "Which activities, attractions and destinations will appeal to people from other countries, and why?" Students rate each criterion (1 to 5), relating it to their exchange student's profile.
- Students should be matched with a peer tutor if having difficulty matching tourist venues to student profiles.
- Research rubrics (DOC 38KB) can be created to assist in assessing student work.
- Students compile a publication to describe and promote their tourist attraction and destination. Examples could include a brochure (PDF 224KB) or postcard (PDF 123KB).
- Students compile their multimedia presentation (PPT 176KB) of their local attractions and national destinations and present it to the class.
- All presentations are burnt to CD and placed in the Class Tourist Information Library.

Students compile a local and national itinerary for people travelling from Japan to Australia.
- Prior to this stage of the unit, teachers may create support materials (PPT 648KB) to guide students through the process and purpose of their research and activities.
- The teacher poses the question "Where can we obtain further necessary information for a travel itinerary?" Students discuss sources of travel and accommodation information.
- Students view the teacher presentation (PPT 648KB) which covers all components of the project.

Students participate in the following learning activities.
- Engage in explicit and focused mathematics lessons to cover all numeracy skills required for the completion of all aspects of the travel itineraries: 24 hour time (scheduling, timelines), measurement (distance, travel time, speed), mathematical formulas for Microsoft Excel, scale and coordinates for mapping, money (exchange rates), budgeting constraints, interpreting graphs.
- Participate in an ongoing directed lesson with the Key Teacher ICT in the computer lab, constructing an itinerary budget using Excel spreadsheets: activity, details, quantity, cost per item, total cost.
- Students consider "What is a travel itinerary and a reasonable budget?" They may return to this question at later times as they gather more information to support their itinerary and budget, and need to compare prices and options.
- Look at examples of travel itineraries collected from travel agents and/or actual personal examples.
- Become familiar with and use as a guide a schedule of travelling times within Queensland and within Australia (by plane, train, bus and car).
- Undertake a selection process for their local and nationwide itinerary venues. Students reflect on which activities and attractions/destinations may appeal to their travellers. They may also consider whether it is appropriate to use knowledge of their own needs and values at times when planning the itinerary. Are some needs and interests universal? Students will also consider the question "How can we accommodate the needs and interests of people from other countries?" They will consider the personal interest profile of their traveller and rate different categories: city vs country, adventure vs culture, depth vs breadth.
- Students present their itinerary proposal to the teacher for approval and conference. The local itinerary is to reflect 2 days travelling from Brisbane by car, bus or train.
- Discuss the structure of the itinerary format. Students should include destination, arrival and departure timing, schedule time (24 hour), transport company name, accommodation, activities, cultural/recreational experiences, and costs.
- Compile and write their itinerary for the local and/or national tour using Word and/or Word table formatting.
- Create a website (PDF 352.6KB) or multimedia presentation about their local and/or national itinerary.
- In pairs students review each other's work and provide feedback.
- Teacher and students discuss whether the research and thinking have enabled them to compile an entertaining and informative itinerary for their Japanese exchange student. The teacher encourages students to expand and justify their responses.

Students revisit and use persuasive writing genres to compare their local and national itineraries and make a recommendation, giving a brief justification in terms of what they were designed to achieve. Students use the research and thinking they have done throughout the unit to justify why they developed their particular local and national itineraries for their Japanese exchange student, and what influenced their choices.

Students will evaluate the Rich Task using a PMI. Students celebrate their learning by inviting parents and the school community to a Travel Expo.

Accommodations for Diverse Needs

Students with special needs
- Modified requirements: local travel itinerary (1 day) based on an excursion to Lone Pine Koala Sanctuary.
- Itinerary constructed with extra resources from SEU teacher aides.
- Students receive extended working time and templates for activities.
- Provide self-paced learning materials, e.g. Microsoft PowerPoints* of step-by-step instructions or a student checklist in sequential order to follow.

English as a second language (ESL) students
- ESL teacher provides extra assistance and an adapted program
- Peer assistance
- Provide self-paced or extension learning activities.

Shannon Bryant and Kate Jones participated in the Intel Teach Program, which resulted in this idea for a classroom project. A team of teachers expanded the plan into the example you see here.

© The State of Queensland (Department of Education and the Arts) 2006.
* Other names and brands may be claimed as the property of others.
Lesson 37, Quadratic equations: Section 2

IN LESSON 18 we saw a technique called completing the square. We will now see how to apply it to solving a quadratic equation.

Completing the square

If we try to solve this quadratic equation by factoring,

x² + 6x + 2 = 0,

we cannot find two integers whose sum is 6 and whose product is 2. Therefore we solve it by completing the square. This technique is valid only when the coefficient of x² is 1.

1) Transpose the constant term to the right:

x² + 6x = −2.

2) Add a square number to both sides: add the square of half the coefficient of x. In this case, add the square of 3, that is, 9:

x² + 6x + 9 = −2 + 9.

The left-hand side is now the perfect square of (x + 3):

(x + 3)² = 7.

3 is half of the coefficient 6. That equation has the form

a² = b, which implies a = ±√b.

Therefore,

x + 3 = ±√7
x = −3 ± √7.

That is, the solutions to x² + 6x + 2 = 0 are the conjugate pair −3 + √7, −3 − √7. For a method of checking these roots, see the theorem of the sum and product of the roots: Lesson 10 of Topics in Precalculus, Problem 6.

Problem. Solve each quadratic equation by completing the square.

Problem 7. Find two numbers whose sum is 10 and whose product is 20.

The numbers are the roots of x² − 10x + 20 = 0. Completing the square: x² − 10x = −20; x² − 10x + 25 = 5; (x − 5)² = 5. Hence

x = 5 ± √5.

The quadratic formula

The roots of the general quadratic

ax² + bx + c = 0

are given by

x = (−b ± √(b² − 4ac)) / 2a.

If we call those two roots r1 and r2, then the quadratic can be factored as (x − r1)(x − r2). We will prove the quadratic formula below.

Example 4. Use the quadratic formula to solve this quadratic equation:

3x² + 5x − 8 = 0

Solution. We have: a = 3, b = 5, c = −8. Therefore, according to the formula:

x = (−5 ± √(25 − 4·3·(−8))) / (2·3) = (−5 ± √(25 + 96)) / 6 = (−5 ± √121) / 6 = (−5 ± 11) / 6.

So x = 6/6 = 1 or x = −16/6 = −8/3. Those are the two roots. And they are rational. When the roots are rational, we could have solved the equation by factoring, which is always the simplest method.

Problem 8. Use the quadratic formula to find the roots of each quadratic.

a) x² − 5x + 5. Here a = 1, b = −5, c = 5, so x = (5 ± √(25 − 20))/2 = (5 ± √5)/2.

b) 2x² − 8x + 5. Here a = 2, b = −8, c = 5, so x = (8 ± √(64 − 40))/4 = (8 ± 2√6)/4 = (4 ± √6)/2.

c) 5x² − 2x + 2. Here a = 5, b = −2, c = 2, so x = (2 ± √(4 − 40))/10 = (2 ± 6i)/10 = (1 ± 3i)/5.

The radicand b² − 4ac is called the discriminant. If the discriminant is positive, the roots are real and unequal; if it is zero, the roots are real and equal (a double root); if it is negative, the roots are a conjugate pair of complex numbers.

Problem 9. Show: If the roots of ax² + bx + c are complex, and a, b, c are positive, then 2a − b + c > 0.

Since the roots are complex, the discriminant b² − 4ac < 0. That implies b² < 4ac. Now, 2a − b + c > 0 if and only if b < 2a + c, if and only if b² < (2a + c)² = 4a² + 4ac + c², which is true. For, since b² < 4ac, it is certainly less than 4a² + 4ac + c², because 4a² and c² are positive. The student should be familiar with the logical expression "if and only if".

Proof of the quadratic formula

To prove the quadratic formula, we complete the square. But to do that, the coefficient of x² must be 1. Therefore, we will divide both sides of the original equation by a:

x² + (b/a)x + c/a = 0.

Transpose the constant term:

x² + (b/a)x = −c/a.

Add the square of half the coefficient of x, namely b²/4a², to both sides:

x² + (b/a)x + b²/4a² = b²/4a² − c/a.

The left-hand side is now the perfect square of (x + b/2a). On the right, on multiplying both c and a by 4a, thus making the denominators the same (Lesson 23):

(x + b/2a)² = (b² − 4ac) / 4a².

Therefore,

x + b/2a = ±√(b² − 4ac) / 2a
x = (−b ± √(b² − 4ac)) / 2a.

This is the quadratic formula.

Copyright © 2016 Lawrence Spector
Interstellar space travel is manned or unmanned travel between stars. Interstellar travel is much more difficult than interplanetary travel: the distances between the planets in the Solar System are typically measured in astronomical units (AU), whereas the distances between stars are typically hundreds of thousands of AU, and usually expressed in light-years. Because of the vastness of those distances, interstellar travel would require either great speed (some percentage of the speed of light) or huge travel time (lasting from decades to millennia).

The speeds required for interstellar travel in a human lifespan are far beyond what current methods of spacecraft propulsion can provide. The energy required to propel a spacecraft to these speeds, regardless of the propulsion system used, is enormous by today's standards of energy production. At these speeds, collisions between the spacecraft and interstellar dust and gas can produce very dangerous effects both for any passengers and for the spacecraft itself.

A number of widely differing strategies have been proposed to deal with these problems, ranging from giant arks that would carry entire societies and ecosystems very slowly, to microscopic space probes. Many different propulsion systems have been proposed to give spacecraft the required speeds: these range from different forms of nuclear propulsion, to beamed-energy methods that would require megascale engineering projects, to methods based on speculative physics. For both unmanned and manned interstellar travel, considerable technological and economic challenges would need to be met. Even the most optimistic views hold that interstellar travel might happen decades in the future, owing to exponential advances in technology; the more common view is that it is a century or more away.

Contents
- 1 Challenges
- 2 Prime targets for interstellar travel
- 3 Proposed methods
  - 3.1 Slow, uncrewed probes
  - 3.2 Fast, uncrewed probes
  - 3.3 Slow, manned missions
  - 3.4 Island hopping through interstellar space
  - 3.5 Fast missions
  - 3.6 By transmission
- 4 Propulsion
  - 4.1 Rocket concepts
  - 4.2 Non-rocket concepts
  - 4.3 Speculative methods
- 5 Designs and studies
- 6 Non-profit organizations
- 7 Skepticism
- 8 See also
- 9 Notes
- 10 Further reading
- 11 External links

Challenges

The basic challenge facing interstellar travel is the immense distances between the stars. Astronomical distances are measured using different units of length, depending on the scale of the distances involved. Between the planets in the Solar System they are often measured in astronomical units (AU), defined as the average distance between the Sun and Earth, some 150 million kilometers (93 million miles). Venus, the closest other planet to Earth, is (at closest approach) 0.28 AU away. Neptune, the farthest planet from the Sun, is 29.8 AU away. Voyager 1, the farthest man-made object from Earth, is 130.83 AU away. The closest known star, Proxima Centauri, however, is some 268,332 AU away, or 9,000 times farther away than even the farthest planet in the Solar System.

| Object | Distance (AU) | Light travel time |
| The Moon | 0.0026 | 1.3 seconds |
| Venus (nearest planet) | 0.28 | 2.41 minutes |
| Neptune (farthest planet) | 29.8 | 4.1 hours |
| Voyager 1 | 130.83 | 18.1 hours |
| Proxima Centauri (nearest star) | 268,332 | 4.24 years |

Because of this, distances between stars are usually expressed in light-years, defined as the distance that a ray of light travels in a year.
Light in a vacuum travels around 300,000 kilometers (186,000 miles) per second, so one light-year is some 9.46 trillion kilometers (5.87 trillion miles) or 63,241 AU. Proxima Centauri is 4.243 light-years away.

Another way of understanding the vastness of interstellar distances is by scaling: one of the closest stars to the Sun, Alpha Centauri A (a Sun-like star), can be pictured by scaling down the Earth–Sun distance to one meter (~3.3 ft). On this scale, the distance to Alpha Centauri A would be 271 kilometers (169 miles).

The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/600th of a light-year in 30 years and is currently moving at 1/18,000th the speed of light. At this rate, a journey to Proxima Centauri would take 80,000 years. Some combination of great speed and long travel time is therefore required; propulsion methods based on currently known physical principles imply travel times of years to millennia.

A significant factor contributing to the difficulty is the energy that must be supplied to obtain a reasonable travel time. A lower bound for the required energy is the kinetic energy K = ½mv², where m is the final mass. If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the lower bound for the required energy is doubled, to mv².

The velocity for a manned round trip of a few decades to even the nearest star is several thousand times greater than those of present space vehicles. This means that, due to the v² term in the kinetic energy formula, millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 450 PJ (4.5 × 10¹⁷ J), or 125 billion kWh, without factoring in the efficiency of the propulsion mechanism. For comparison, world energy consumption in 2008 was 143,851 TWh (1 terawatt-hour (TWh) = 1 billion kilowatt-hours (kWh)). This energy has to be generated on board from stored fuel, harvested from the interstellar medium, or projected over immense distances.

The mass of any craft capable of carrying humans would inevitably be substantially larger than that necessary for an unmanned interstellar probe. For instance, the first space probe, Sputnik 1, had a payload of 83.6 kg, whereas the first spacecraft carrying a living passenger (the dog Laika), Sputnik 2, had a payload six times that, at 508.3 kg. This understates the difference in the case of interstellar missions, given the vastly greater travel times involved and the resulting necessity of a closed-cycle life support system. Given the aggregate risks and support requirements of manned interstellar travel, and as technology continues to advance, the first interstellar missions are unlikely to carry life forms.

A major issue with traveling at extremely high speeds is that interstellar dust and gas may cause considerable damage to the craft, due to the high relative speeds and large kinetic energies involved. Various shielding methods to mitigate this problem have been proposed. Larger objects (such as macroscopic dust grains) are far less common, but would be much more destructive. The risks of impacting such objects, and methods of mitigating these risks, have been discussed in the literature, but many unknowns remain. An interstellar ship would also face the manifold hazards found in interplanetary travel, including vacuum, radiation, weightlessness, and micrometeoroids. Even the minimum multi-year travel times to the nearest stars are beyond current manned space mission design experience.
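The 450 PJ figure quoted above follows directly from the kinetic-energy lower bound. Below is a minimal sketch checking the arithmetic; the 1,000 kg payload and 0.1 c target speed are the example values from the text, and the classical formula is used (the relativistic correction at 0.1 c is about one percent):

```python
# Check of the kinetic-energy lower bound for accelerating one ton to 0.1 c.
c = 299_792_458.0   # speed of light, m/s
m = 1_000.0         # payload mass, kg (one metric ton)
v = 0.1 * c         # target cruise speed, m/s

K = 0.5 * m * v**2  # classical kinetic energy, in joules
print(f"K = {K:.2e} J = {K / 1e15:.0f} PJ = {K / 3.6e6 / 1e9:.0f} billion kWh")
# -> K = 4.49e+17 J = 449 PJ = 125 billion kWh, matching the ~450 PJ figure;
#    double it if the ship must also shed that speed at the destination.
```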
More speculative approaches to interstellar travel offer the possibility of circumventing these difficulties. Special relativity offers the possibility of shortening the travel time through relativistic time dilation: if a starship could reach velocities approaching the speed of light, the journey time as experienced by the traveler would be greatly reduced (see the time dilation section). General relativity offers the theoretical possibility that faster-than-light travel could greatly shorten travel times, both for the traveler and for those on Earth (see the faster-than-light travel section).

It has been argued that an interstellar mission that cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity, not yet having reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more-advanced propulsion (the incessant obsolescence postulate). On the other hand, Andrew Kennedy has shown that if one calculates the journey time to a given destination while the attainable travel speed keeps increasing through growth (even exponential growth), there is a clear minimum in the total time to that destination from now (see wait calculation). Voyages undertaken before the minimum will be overtaken by those who leave at the minimum, whereas those who leave after the minimum will never overtake those who left at the minimum.

One argument against delaying a start until fast propulsion velocities are reached is that the various other, non-technical problems specific to long-distance travel at considerably higher speed (such as interstellar particle impact, or a possible dramatic shortening of the average human life span during extended space residence) may remain obstacles that take much longer to resolve than the propulsion issue alone, assuming they can be solved at all. A case can therefore be made for starting a mission without delay, based on the concept of an achievable, dedicated, but relatively slow interstellar mission using the current technological state of the art at relatively low cost, rather than banking on being able to solve all the problems associated with a faster mission without a reliable time frame for doing so.

The round-trip delay time is the minimum time between an observation by the probe and the moment the probe can receive instructions from Earth reacting to the observation. Given that information can travel no faster than the speed of light, this is about 36 hours for Voyager 1, and near Proxima Centauri it would be 8 years. Faster reactions would have to be programmed to be carried out automatically. Of course, in the case of a manned flight the crew can respond immediately to their observations. However, the round-trip delay time leaves them not only extremely distant from Earth but, in terms of communication, also extremely isolated from it (analogous to how past long-distance explorers were similarly isolated before the invention of the electrical telegraph). Interstellar communication is still problematic: even if a probe could reach the nearest star, its ability to communicate back to Earth would be difficult given the extreme distance. See Interstellar communication.

Prime targets for interstellar travel

| Stellar system | Distance (ly) | Remarks |
| Alpha Centauri | 4.3 | Closest system. Three stars (G2, K1, M5). Component A is similar to the Sun (a G2 star). Alpha Centauri B has one confirmed planet. |
| Barnard's Star | 6 | Small, low-luminosity M5 red dwarf. Second closest to the Solar System. |
| Sirius | 8.7 | Large, very bright A1 star with a white dwarf companion. |
| Epsilon Eridani | 10.8 | Single K2 star slightly smaller and colder than the Sun. Has two asteroid belts, might have a giant and one much smaller planet, and may possess a Solar-System-type planetary system. |
| Tau Ceti | 11.8 | Single G8 star similar to the Sun. High probability of possessing a Solar-System-type planetary system: current evidence shows 5 planets, with potentially two in the habitable zone. |
| Gliese 581 | 20.3 | Multiple-planet system. The unconfirmed exoplanet Gliese 581 g and the confirmed exoplanet Gliese 581 d are in the star's habitable zone. |
| Gliese 667C | 22 | A system with at least six planets. A record-breaking three of these planets are super-Earths lying in the zone around the star where liquid water could exist, making them possible candidates for the presence of life. |
| Vega | 25 | At least one planet, and of a suitable age to have evolved primitive life. |

Existing and near-term astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration.

Proposed methods

Slow, uncrewed probes

Slow interstellar missions based on current and near-future propulsion technologies are associated with trip times ranging from about one hundred years to thousands of years. These missions consist of sending a robotic probe to a nearby star for exploration, similar to interplanetary probes such as those used in the Voyager program. By taking along no crew, the cost and complexity of the mission are significantly reduced, although technology lifetime remains a significant issue alongside obtaining a reasonable speed of travel. Proposed concepts include Project Daedalus, Project Icarus and Project Longshot.

Fast, uncrewed probes

Near-lightspeed nanospacecraft, built on existing microchip technology with newly developed nanoscale thrusters, might be possible in the near future. Researchers at the University of Michigan are developing thrusters that use nanoparticles as propellant. Their technology is called a "nanoparticle field extraction thruster", or nanoFET. These devices act like small particle accelerators shooting conductive nanoparticles out into space.

Michio Kaku, a theoretical physicist, has suggested that clouds of "smart dust" be sent to the stars, which may become possible with advances in nanotechnology. Kaku also notes that a large number of nanoprobes would need to be sent, because very small probes are easily deflected by magnetic fields, micrometeorites and other dangers; redundancy is needed to ensure that at least one nanoprobe survives the journey and reaches the destination. Given the light weight of these probes, it would take much less energy to accelerate them. With on-board solar cells, they could continually accelerate using solar power. One can envision a day when a fleet of millions or even billions of these particles swarm to distant stars at nearly the speed of light and relay signals back to Earth through a vast interstellar communication network.

Slow, manned missions

In crewed missions, the duration of a slow interstellar journey presents a major obstacle, and existing concepts deal with this problem in different ways. They can be distinguished by the "state" in which humans are transported on board the spacecraft.
A generation ship (or world ship) is a type of interstellar ark in which the crew that arrives at the destination is descended from those who started the journey. Generation ships are not currently feasible because of the difficulty of constructing a ship of the enormous required scale and the great biological and sociological problems that life aboard such a ship raises. Scientists and writers have postulated various techniques for suspended animation. These include human hibernation and cryonic preservation. Although neither is currently practical, they offer the possibility of sleeper ships in which the passengers lie inert for the long duration of the voyage. Extended human lifespan A variant on this possibility is based on the development of substantial human life extension, such as the "Strategies for Engineered Negligible Senescence" proposed by Dr. Aubrey de Grey. If a ship crew had lifespans of some thousands of years, or had artificial bodies, they could traverse interstellar distances without the need to replace the crew in generations. The psychological effects of such an extended period of travel would potentially still pose a problem. A robotic space mission carrying some number of frozen early stage human embryos is another theoretical possibility. This method of space colonization requires, among other things, the development of an artificial uterus, the prior detection of a habitable terrestrial planet, and advances in the field of fully autonomous mobile robots and educational robots that would replace human parents. A more speculative method of transporting humans to the stars is by using mind uploading or also called brain emulation. Frank J. Tipler speculates about the colonization of the universe by starships transporting uploaded astronauts. Hein presents a range of concepts how such missions could be conducted, using more or less speculative technologies, for example self-replicating machines, wormholes, and teleportation. One of the major challenges besides mind uploading itself are the means for downloading the uploads into physical entities, which can be biological or artificial or both. Island hopping through interstellar space Interstellar space is not completely empty; it contains trillions of icy bodies ranging from small asteroids (Oort cloud) to possible rogue planets. There may be ways to take advantage of these resources for a good part of an interstellar trip, slowly hopping from body to body or setting up waystations along the way. If a spaceship could average 10 percent of light speed (and decelerate at the destination, for manned missions), this would be enough to reach Proxima Centauri in forty years. Several propulsion concepts are proposed that might be eventually developed to accomplish this (see section below on propulsion methods), but none of them are ready for near-term (few decades) development at acceptable cost. Assuming one cannot travel faster than light one might conclude that a human can never make a round-trip farther from Earth than 20 light years if the traveler is active between the ages of 20 and 60. A traveler would never be able to reach more than the very few star systems that exist within the limit of 20 light years from Earth. This, however, fails to take into account time dilation. Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were powerful enough the ship could reach mostly anywhere in the galaxy and return to Earth within 40 years ship-time. 
Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth. A spaceship could travel to a star 32 light-years away, initially accelerating at a constant 1.03g (i.e. 10.1 m/s2) for 1.32 years (ship time), then stopping its engines and coasting for the next 17.3 years (ship time) at a constant speed, then decelerating again for 1.32 ship-years, and coming to a stop at the destination. After a short visit the astronaut could return to Earth the same way. After the full round-trip, the clocks on board the ship show that 40 years have passed, but according to those on Earth, the ship comes back 76 years after launch. From the viewpoint of the astronaut, on-board clocks seem to be running normally. The star ahead seems to be approaching at a speed of 0.87 lightyears per ship-year. The universe would appear contracted along the direction of travel to half the size it had when the ship was at rest; the distance between that star and the Sun would seem to be 16 light years as measured by the astronaut. At higher speeds, the time onboard will run even slower, so the astronaut could travel to the center of the Milky Way (30 kly from Earth) and back in 40 years ship-time. But the speed according to Earth clocks will always be less than 1 lightyear per Earth year, so, when back home, the astronaut will find that 60 thousand years will have passed on Earth. Regardless of how it is achieved, if a propulsion system can produce acceleration continuously from departure to destination, then this will be the fastest method of travel. If the propulsion system drives the ship faster and faster for the first half of the journey, then turns around and brakes the craft so that it arrives at the destination at a standstill, this is a constant acceleration journey. If this were performed at nearly 1g, this would have the added advantage of producing artificial "gravity". This is, however, prohibitively expensive with current technology. From the planetary observer perspective the ship will appear to steadily accelerate but more slowly as it approaches the speed of light. The ship will be close to the speed of light after about a year of accelerating and remain at that speed until it brakes for the end of the journey. From the ship perspective there will be no top limit on speed – the ship keeps going faster and faster the whole first half. This happens because the ship's time sense slows down – relative to the planetary observer – the more it approaches the speed of light. The result is an impressively fast journey if you are in the ship. If physical entities could be transmitted as information and reconstructed at a destination, travel at nearly the speed of light would be possible, which for the "travelers" would be instantaneous. However, sending an atom-by-atom description of (say) a human body would be a daunting task. Extracting and sending only a computer brain simulation is a significant part of that problem. "Journey" time would be the light-travel time plus the time needed to encode, send and reconstruct the whole transmission. All rocket concepts are limited by the rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and mass ratio, the ratio of initial (M0, including fuel) to final (M1, fuel depleted) mass. Very high specific power, the ratio of thrust to total vehicle mass, is required to reach interstellar targets within sub-century time-frames. 
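Before turning to specific propulsion technologies and their engineering limits, the figures above can be checked with a minimal sketch (in Python) of the standard relativistic constant-acceleration formulas for a "flip-and-burn" trip that accelerates at 1 g to the midpoint and then decelerates the rest of the way, with no coasting phase. The destinations and the 1 g value are chosen purely for illustration; the itinerary described above coasts at roughly 0.87c for most of the 32-light-year trip instead of accelerating throughout, so its ship time is correspondingly longer than the figure computed here.

```python
import math

C = 299_792_458.0     # speed of light, m/s
LY = 9.4607e15        # one light-year, m
YEAR = 3.156e7        # one year, s

def flip_and_burn(distance_ly, accel=9.81):
    """Constant proper acceleration to the midpoint, then constant
    deceleration to rest at the destination (no coasting).
    Returns (ship_time_years, earth_time_years)."""
    x = distance_ly * LY / 2.0                                    # half distance, m
    ship_half = (C / accel) * math.acosh(1.0 + accel * x / C**2)  # proper time, s
    earth_half = math.sqrt((x / C) ** 2 + 2.0 * x / accel)        # coordinate time, s
    return 2.0 * ship_half / YEAR, 2.0 * earth_half / YEAR

for d in (4.3, 32.0):   # Alpha Centauri, and the 32-light-year example above
    ship, earth = flip_and_burn(d)
    print(f"{d:5.1f} ly: {ship:5.1f} years ship time, {earth:5.1f} years Earth time")
```

At 1 g this gives roughly 3.6 ship-years (about 6 Earth-years) to Alpha Centauri and about 7 ship-years (34 Earth-years) one way to a star 32 light-years away, illustrating how strongly time dilation compresses the traveler's journey once the ship spends most of the trip near light speed.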
Some heat transfer is inevitable and a tremendous heating load must be adequately handled. Thus, for interstellar rocket concepts of all technologies, a key engineering problem (seldom explicitly discussed) is limiting the heat transfer from the exhaust stream back into the vehicle.
Nuclear fission powered
Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, have the potential to reach speeds much greater than chemically powered vehicles or nuclear-thermal rockets. Such vehicles probably have the potential to power Solar System exploration with reasonable trip times within the current century. Because of their low-thrust propulsion, they would be limited to off-planet, deep-space operation. Electrically powered spacecraft propulsion powered by a portable power source, say a nuclear reactor, producing only small accelerations, would take centuries to reach, for example, 15% of the velocity of light, making it unsuitable for interstellar flight during a single human lifetime. Fission-fragment rockets use nuclear fission to create high-speed jets of fission fragments, which are ejected at speeds of up to 12,000 km/s. With fission, the energy output is approximately 0.1% of the total mass-energy of the reactor fuel, which limits the effective exhaust velocity to about 5% of the velocity of light. For maximum velocity, the reaction mass should optimally consist of fission products, the "ash" of the primary energy source, so that no extra reaction mass need be accounted for in the mass ratio. This is known as a fission-fragment rocket. Nuclear thermal-propulsion engines such as NERVA produce sufficient thrust, but can only achieve relatively low-velocity exhaust jets, so accelerating to the desired speed would require an enormous amount of fuel. Based on work in the late 1950s to the early 1960s, it has been technically possible to build spaceships with nuclear pulse propulsion engines, i.e. driven by a series of nuclear explosions. This propulsion system offers the prospect of very high specific impulse (space travel's equivalent of fuel economy) and high specific power. Project Orion team member Freeman Dyson proposed in 1968 an interstellar spacecraft using nuclear pulse propulsion that used pure deuterium fusion detonations with a very high fuel-burnup fraction. He computed an exhaust velocity of 15,000 km/s and a 100,000-tonne space vehicle able to achieve a 20,000 km/s delta-v, allowing a flight time to Alpha Centauri of 130 years. Later studies indicate that the top cruise velocity that can theoretically be achieved by a Teller–Ulam thermonuclear unit powered Orion starship, assuming no fuel is saved for slowing back down, is about 8% to 10% of the speed of light (0.08–0.1c). An atomic (fission) Orion can achieve perhaps 3%–5% of the speed of light. A nuclear pulse drive starship powered by fusion-antimatter catalyzed nuclear pulse propulsion units would be similarly in the 10% range, and pure matter-antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% and 80% of the speed of light. In each case, saving fuel for slowing down halves the maximum speed. The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant; this would allow the ship to travel near the maximum theoretical velocity. Alternative designs utilizing similar principles include Project Longshot, Project Daedalus, and Mini-Mag Orion.
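A short numerical sketch (Python; round numbers taken from the Dyson figures just quoted, and non-relativistic since the speeds involved are only a few percent of c) shows how the classical Tsiolkovsky rocket equation ties these values together, and why reserving half the delta-v for braking at the target roughly doubles the trip time:

```python
import math

def mass_ratio(delta_v, v_exhaust):
    """Tsiolkovsky rocket equation: M0 / M1 = exp(delta_v / v_exhaust)."""
    return math.exp(delta_v / v_exhaust)

V_E     = 15_000e3            # exhaust velocity from Dyson's 1968 study, m/s
DELTA_V = 20_000e3            # total delta-v of the 100,000-tonne vehicle, m/s
D_AC    = 4.3 * 9.4607e15     # distance to Alpha Centauri, m
YEAR    = 3.156e7             # seconds per year

print(f"mass ratio M0/M1: {mass_ratio(DELTA_V, V_E):.2f}")   # about 3.8

# Reserving half the delta-v for deceleration leaves a cruise speed of
# 10,000 km/s (~3.3% of c), which roughly reproduces the 130-year figure.
cruise = DELTA_V / 2
print(f"one-way trip time: {D_AC / cruise / YEAR:.0f} years")
```

The same equation is what drives the mass ratios of 1,000 to 1,000,000 mentioned later for start-stop missions at a substantial fraction of light speed.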
The principle of external nuclear pulse propulsion to maximize survivable power has remained common among serious concepts for interstellar flight without external power beaming and for very high-performance interplanetary flight. In the 1970s the Nuclear Pulse Propulsion concept further was refined by Project Daedalus by use of externally triggered inertial confinement fusion, in this case producing fusion explosions via compressing fusion fuel pellets with high-powered electron beams. Since then, lasers, ion beams, neutral particle beams and hyper-kinetic projectiles have been suggested to produce nuclear pulses for propulsion purposes. A current impediment to the development of any nuclear-explosion-powered spacecraft is the 1963 Partial Test Ban Treaty, which includes a prohibition on the detonation of any nuclear devices (even non-weapon based) in outer space. This treaty would therefore need to be renegotiated, although a project on the scale of an interstellar mission using currently foreseeable technology would probably require international cooperation on at least the scale of the International Space Station. Nuclear fusion rockets Fusion rocket starships, powered by nuclear fusion reactions, should conceivably be able to reach speeds of the order of 10% of that of light, based on energy considerations alone. In theory, a large number of stages could push a vehicle arbitrarily close to the speed of light. These would "burn" such light element fuels as deuterium, tritium, 3He, 11B, and 7Li. Because fusion yields about 0.3–0.9% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases <0.1% of the fuel's mass-energy. The maximum exhaust velocities potentially energetically available are correspondingly higher than for fission, typically 4–10% of c. However, the most easily achievable fusion reactions release a large fraction of their energy as high-energy neutrons, which are a significant source of energy loss. Thus, although these concepts seem to offer the best (nearest-term) prospects for travel to the nearest stars within a (long) human lifetime, they still involve massive technological and engineering difficulties, which may turn out to be intractable for decades or centuries. Early studies include Project Daedalus, performed by the British Interplanetary Society in 1973–1978, and Project Longshot, a student project sponsored by NASA and the US Naval Academy, completed in 1988. Another fairly detailed vehicle system, "Discovery II", designed and optimized for crewed Solar System exploration, based on the D3He reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of >300 km/s with an acceleration of ~1.7•10−3 g, with a ship initial mass of ~1700 metric tons, and payload fraction above 10%. Although these are still far short of the requirements for interstellar travel on human timescales, the study seems to represent a reasonable benchmark towards what may be approachable within several decades, which is not impossibly beyond the current state-of-the-art. Based on the concept's 2.2% burnup fraction it could achieve a pure fusion product exhaust velocity of ~3,000 km/s. An antimatter rocket would have a far higher energy density and specific impulse than any other proposed class of rocket. 
If energy resources and efficient production methods are found to make antimatter in the quantities required and store it safely, it would be theoretically possible to reach speeds approaching that of light. Relativistic time dilation would then become more noticeable, making time pass at a slower rate for the travelers as perceived by an outside observer and reducing the trip time experienced by the travelers. Supposing the production and storage of antimatter should become practical, two further problems would arise and need to be solved. First, in the annihilation of antimatter, much of the energy is lost as very penetrating high-energy gamma radiation, and especially also as neutrinos, so that substantially less than mc² would actually be available if the antimatter were simply allowed to annihilate into radiation thermally. Even so, the energy available for propulsion would probably be substantially higher than the ~1% of mc² yield of nuclear fusion, the next-best rival candidate. Second, once again, heat transfer from exhaust to vehicle seems likely to deposit enormous wasted energy into the ship, considering the large fraction of the energy that goes into penetrating gamma rays. Even assuming biological shielding were provided to protect the passengers, some of the energy would inevitably heat the vehicle, and may thereby prove limiting. This requires consideration for serious proposals if useful accelerations are to be achieved, because the energies involved (e.g. for 0.1 g ship acceleration, approaching 0.3 trillion watts per ton of ship mass) are very large.
Rockets with an external energy source
Rockets deriving their power from external sources, such as a laser, could replace their internal energy source with an energy collector, potentially reducing the mass of the ship greatly and allowing much higher travel speeds. Geoffrey A. Landis has proposed an interstellar probe in which energy beamed from an external laser at a base station powers an ion thruster. A problem with all traditional rocket propulsion methods is that the spacecraft would need to carry its fuel with it, thus making it very massive, in accordance with the rocket equation. Some concepts attempt to escape from this problem: In 1960, Robert W. Bussard proposed the Bussard ramjet, a fusion rocket in which a huge scoop would collect the diffuse hydrogen in interstellar space, "burn" it on the fly using a proton–proton fusion reaction, and expel it out of the back. Later calculations with more accurate estimates suggest that the thrust generated would be less than the drag caused by any conceivable scoop design. Yet the idea is attractive because the fuel would be collected en route (commensurate with the concept of energy harvesting), so the craft could theoretically accelerate to near the speed of light. A light sail or magnetic sail powered by a massive laser or particle accelerator in the home star system could potentially reach even greater speeds than rocket- or pulse-propulsion methods, because it would not need to carry its own reaction mass and therefore would only need to accelerate the craft's payload. Robert L. Forward proposed a means for decelerating an interstellar light sail in the destination star system without requiring a laser array to be present in that system. In this scheme, a smaller secondary sail is deployed to the rear of the spacecraft, whereas the large primary sail is detached from the craft to keep moving forward on its own.
Light is reflected from the large primary sail to the secondary sail, which is used to decelerate the secondary sail and the spacecraft payload. A magnetic sail could also decelerate at its destination without depending on carried fuel or a driving beam in the destination system, by interacting with the plasma found in the solar wind of the destination star and the interstellar medium.
|Mission||Laser Power||Vehicle Mass||Acceleration||Sail Diameter||Maximum Velocity (% of the speed of light)|
|1. Flyby - Alpha Centauri, 40 years|
|outbound stage||65 GW||1 t||0.036 g||3.6 km||11% @ 0.17 ly|
|2. Rendezvous - Alpha Centauri, 41 years|
|outbound stage||7,200 GW||785 t||0.005 g||100 km||21% @ 4.29 ly|
|deceleration stage||26,000 GW||71 t||0.2 g||30 km||21% @ 4.29 ly|
|3. Manned - Epsilon Eridani, 51 years (including 5 years exploring star system)|
|outbound stage||75,000,000 GW||78,500 t||0.3 g||1000 km||50% @ 0.4 ly|
|deceleration stage||21,500,000 GW||7,850 t||0.3 g||320 km||50% @ 10.4 ly|
|return stage||710,000 GW||785 t||0.3 g||100 km||50% @ 10.4 ly|
|deceleration stage||60,000 GW||785 t||0.3 g||100 km||50% @ 0.4 ly|
Achieving start-stop interstellar trip times of less than a human lifetime requires mass ratios of between 1,000 and 1,000,000, even for the nearer stars. This could be achieved by multi-staged vehicles on a vast scale. Alternatively, large linear accelerators could propel fuel to fission-propelled space vehicles, avoiding the limitations of the rocket equation. Scientist T. Marshall Eubanks thinks that nuggets of condensed quark matter may exist at the centers of some asteroids, created during the Big Bang, each nugget with a mass of 10^10 to 10^11 kg. If so, these could be an enormous source of energy, as the nuggets could be used to generate huge quantities of antimatter—about a million tonnes of antimatter per nugget. This would be enough to propel a spacecraft close to the speed of light.
Hawking radiation rockets
In a black hole starship, a parabolic reflector would reflect Hawking radiation from an artificial black hole. In 2009, Louis Crane and Shawn Westmoreland of Kansas State University published a paper investigating the feasibility of this idea. Their conclusion was that it was on the edge of possibility, but that quantum gravity effects that are presently unknown may make it easier or make it impossible.
Magnetic monopole rockets
If some of the grand unification models are correct, e.g. 't Hooft–Polyakov, it would be possible to construct a photonic engine that uses no antimatter, thanks to the magnetic monopole that hypothetically can catalyze the decay of a proton to a positron and a π⁰ meson: the π⁰ decays rapidly to two photons, and the positron annihilates with an electron to give two more photons. As a result, a hydrogen atom turns into four photons, and only the problem of a mirror remains unresolved. A magnetic monopole engine could also work on a once-through scheme such as the Bussard ramjet described above. Scientists and authors have postulated a number of ways by which it might be possible to surpass the speed of light. Even the most serious-minded of these are speculative. It is also debated whether this is possible, in part because of causality concerns: in essence, travel faster than light is equivalent to going back in time. Proposed mechanisms for faster-than-light travel within the theory of general relativity require the existence of exotic matter. General relativity may permit the travel of an object faster than light in curved spacetime.
One could imagine exploiting the curvature to take a "shortcut" from one point to another. This is one form of the warp drive concept. In physics, the Alcubierre drive is based on an argument that the curvature could take the form of a wave in which a spaceship might be carried in a "bubble". Space would be collapsing at one end of the bubble and expanding at the other end. The motion of the wave would carry a spaceship from one space point to another in less time than light would take through unwarped space. Nevertheless, the spaceship would not be moving faster than light within the bubble. This concept would require the spaceship to incorporate a region of exotic matter, or "negative mass". Artificial gravity control Scientist Lance Williams thinks that gravity can be controlled artificially through electromagnetic control. Wormholes are conjectural distortions in spacetime that theorists postulate could connect two arbitrary points in the universe, across an Einstein–Rosen Bridge. It is not known whether wormholes are possible in practice. Although there are solutions to the Einstein equation of general relativity that allow for wormholes, all of the currently known solutions involve some assumption, for example the existence of negative mass, which may be unphysical. However, Cramer et al. argue that such wormholes might have been created in the early universe, stabilized by cosmic string. The general theory of wormholes is discussed by Visser in the book Lorentzian Wormholes. Designs and studies The Enzmann starship, as detailed by G. Harry Stine in the October 1973 issue of Analog, was a design for a future starship, based on the ideas of Dr. Robert Duncan-Enzmann. The spacecraft itself as proposed used a 12,000,000 ton ball of frozen deuterium to power 12–24 thermonuclear pulse propulsion units. Twice as long as the Empire State Building and assembled in-orbit, the spacecraft was part of a larger project preceded by interstellar probes and telescopic observation of target star systems. NASA has been researching interstellar travel since its formation, translating important foreign language papers and conducting early studies on applying fusion propulsion, in the 1960s, and laser propulsion, in the 1970s, to interstellar travel. The NASA Breakthrough Propulsion Physics Program (terminated in FY 2003 after a 6-year, $1.2-million study, because "No breakthroughs appear imminent.") identified some breakthroughs that are needed for interstellar travel to be possible. Geoffrey A. Landis of NASA's Glenn Research Center states that a laser-powered interstellar sail ship could possibly be launched within 50 years, using new methods of space travel. "I think that ultimately we're going to do it, it's just a question of when and who," Landis said in an interview. Rockets are too slow to send humans on interstellar missions. Instead, he envisions interstellar craft with extensive sails, propelled by laser light to about one-tenth the speed of light. It would take such a ship about 43 years to reach Alpha Centauri, if it passed through the system. Slowing down to stop at Alpha Centauri could increase the trip to 100 years, whereas a journey without slowing down raises the issue of making sufficiently accurate and useful observations and measurements during a fly-by. 100 Year Starship study The 100 Year Starship (100YSS) is the name of the overall effort that will, over the next century, work toward achieving interstellar travel. The effort will also go by the moniker 100YSS. 
The 100 Year Starship study is the name of a one year project to assess the attributes of and lay the groundwork for an organization that can carry forward the 100 Year Starship vision. Dr. Harold ("Sonny") White from NASA's Johnson Space Center is a member of Icarus Interstellar, the nonprofit foundation whose mission is to realize interstellar flight before the year 2100. At the 2012 meeting of 100YSS, he reported using a laser to try to warp spacetime by 1 part in 10 million with the aim of helping to make interstellar travel possible. - Project Orion, manned interstellar ship (1958–1968). - Project Daedalus, unmanned interstellar probe (1973–1978). - Starwisp, unmanned interstellar probe (1985). - Project Longshot, unmanned interstellar probe (1987–1988). - Starseed/launcher, fleet of unmanned interstellar probes (1996) - Project Valkyrie, manned interstellar ship (2009) - Project Icarus, unmanned interstellar probe (2009–2014). - Sun-diver, unmanned interstellar probe A few organisations dedicated to interstellar propulsion research and advocacy for the case exist worldwide. These are still in their infancy, but are already backed up by a membership of a wide variety of scientists, students and professionals. - Icarus Interstellar - Tau Zero Foundation (USA) - Initiative for Interstellar Studies (UK) - Fourth Millennium Foundation (Belgium) - Space Development Cooperative (Canada) The energy requirements make interstellar travel very difficult. It has been reported that at the 2008 Joint Propulsion Conference, multiple experts opined that it was improbable that humans would ever explore beyond the Solar System. Brice N. Cassenti, an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute, stated at least the total energy output of the entire world [in a given year] would be required to send a probe to the nearest star. - Effect of spaceflight on the human body - Health threat from cosmic rays - Human spaceflight - Intergalactic travel - Interstellar communication - Interstellar travel in fiction - List of nearest terrestrial exoplanet candidates - Uploaded astronaut - Nuclear pulse propulsion - "A Look at the Scaling". nasa.gov. NASA Glenn Research Center. - Westover, Shayne (27 March 2012). Active Radiation Shielding Utilizing High Temperature Superconductors. NIAC Symposium. - Garrett, Henry (30 July 2012). "There and Back Again: A Layman’s Guide to Ultra-Reliability for Interstellar Missions". - E. Bock, F. Lambrou Jr., M. Simon (1979). "Effect of Environmental Parameters on Habitat Structural Weight and Cost". pp. 33–60. - "Space Resources and Space Settlements" John Billingham, William Gilbreath, and Brian O’Leary, 1979 - Astronaut's Energy Requirements for Long-Term Space Flight (Energy) Stephane Blanc, ISS Program Science Office, NASA, June 19, 2014 - Kondo, Yoji (2003). Interstellar Travel and Multi-generational Spaceships. Collector's Guide Publishing, Inc. p. 31. ISBN 1-896522-99-8. - Kennedy, Andrew (July 2006). "Interstellar Travel: The Wait Calculation and the Incentive Trap of Progress". Journal of the British Interplanetary Society 59 (7): 239–246. - Forward, Robert L. (1996). "Ad Astra!". Journal of the British Interplanetary Society 49 (1): 23–32. - Hand, Eric (16 October 2012). "The exoplanet next door". Nature 490 (7420): 309–440. doi:10.1038/nature11572. - "Planet eps Eridani b". exoplanet.eu. Retrieved 2011-01-15. - "Three Planets in Habitable Zone of Nearby Star". eso.org. - Croswell, Ken (3 December 2012). 
"ScienceShot: Older Vega Mature Enough to Nurture Life". sciencemag.org. - Daniel H. Wilson. Near-lightspeed nano spacecraft might be close. msnbc.msn.com. - Kaku, Michio. The Physics of the Impossible. Anchor Books. - Hein, A. M. "How Will Humans Fly to the Stars?". Retrieved 12 April 2013. - Hein, A. M.; et al. (2012). "World Ships: Architectures & Feasibility Revisited". Journal of the British Interplanetary Society 65: 119–133. Bibcode:2012JBIS...65..119H. - Bond, A.; Martin, A.R. (1984). "World Ships – An Assessment of the Engineering Feasibility". Journal of the British Interplanetary Society 37: 254–266. Bibcode:1984JBIS...37..254B. - Frisbee, R.H. (2009). Limits of Interstellar Flight Technology in Frontiers of Propulsion Science. Progress in Astronautics and Aeronautics. - Hein, Andreas M. "Project Hyperion: The Hollow Asteroid Starship – Dissemination of an Idea". Retrieved 12 April 2013. - "Various articles on hibernation". Journal of the British Interplanetary Society 59: 81–144. 2006. - Crowl, A.; Hunt, J.; Hein, A.M. (2012). "Embryo Space Colonisation to Overcome the Interstellar Time Distance Bottleneck". Journal of the British Interplanetary Society 65: 283–285. Bibcode:2012JBIS...65..283C. - Hein, A.M., Transcendence Going Interstellar: How the Singularity Might Revolutionize Interstellar Travel - Sandberg, A., & Bostrom, N. (2008). Whole brain emulation: A roadmap. Future of Humanity Institute Technical Report, 3. - Tipler, F., The Physics of Immortality, Chapter 2, Doubleday, New York, 1994. - Hein, A.M. (2014) The Greatest Challenge: Manned Interstellar Travel, in Long, K.F. Beyond the Boundary, Initiative for Interstellar Studies, lulu - "Clock paradox III". Taylor, Edwin F.; Wheeler, John Archibald (1966). "Chapter 1 Exercise 51". Spacetime Physics. W.H. Freeman, San Francisco. pp. 97–98. ISBN 0-7167-0336-X. - Crowell, Benjamin (2011), Light and Matter Section 4.3 - Tipler, F. (1994). The Physics of Immortality. New York: Doubleday. ISBN 0-19-851949-4. - Orth, C. D. (16 May 2003). "VISTA – A Vehicle for Interplanetary Space Transport Application Powered by Inertial Confinement Fusion". Lawrence Livermore National Laboratory. - Clarke, Arthur C. (1951). The Exploration of Space. New York: Harper. - Project Daedalus: The Propulsion System Part 1; Theoretical considerations and calculations. 2. REVIEW OF ADVANCED PROPULSION SYSTEMS - General Dynamics Corp. (January 1964). "Nuclear Pulse Vehicle Study Condensed Summary Report (General Dynamics Corp.)" (PDF). U.S. Department of Commerce National Technical Information Service. - Freeman J. Dyson (October 1968). "Interstellar Transport" (PDF). American Institute of Physics. - Cosmos by Carl Sagan - Lenard, Roger X.; Andrews, Dana G. (June 2007). "Use of Mini-Mag Orion and superconducting coils for near-term interstellar transportation". Acta Astonautica 61 (1-6 pages= 450-458). doi:10.1016/j.actaastro.2007.01.052. - Friedwardt Winterberg (2010). "The Release of Thermonuclear Energy by Inertial Confinement" (PDF). World Scientific. - D.F. Spencer and L.D. Jaffe. "Feasibility of Interstellar Travel." Astronautica Acta. Vol. IX, 1963, pp. 49–58. - PDF C. R. Williams et al., 'Realizing “2001: A Space Odyssey”: Piloted Spherical Torus Nuclear Fusion Propulsion', 2001, 52 pages, NASA Glenn Research Center - Landis, Geoffrey A. (29 August 1994). Laser-powered Interstellar Probe. Conference on Practical Robotic Interstellar Flight. NY University, New York, NY. - see also:“Non Rocket Space Launch and Flight”. by a. 
Bolonkin, Elsevier, 2005. 488 pgs. ISBN 978-0-08044-731-5, http://www.scribd.com/doc/24056182. - Forward, R.L. (1984). "Roundtrip Interstellar Travel Using Laser-Pushed Lightsails". J Spacecraft 21 (2): 187–195. Bibcode:1984JSpRo..21..187F. doi:10.2514/3.8632. - Andrews, Dana G.; Zubrin, Robert M. (1990). "Magnetic Sails and Interstellar Travel" (PDF). Journal of The British Interplanetary Society 43: 265–272. Retrieved 2014-10-08. - Zubrin, Robert; Martin, Andrew (1999-08-11). "NIAC Study of the Magnetic Sail" (PDF). Retrieved 2014-10-08. - Landis, Geoffrey A. (2003). "The Ultimate Exploration: A Review of Propulsion Concepts for Interstellar Flight". In Kondo, Bruhweiller, Moore and Sheffield. Interstellar Travel and Multi-Generation Space Ships. Apogee Books. ISBN 1-896522-99-8. - Roger X. Lenard & Ronald J. Lipinski "Interstellar rendezvous missions employing fission propulsion systems" (2000), AIP Conf. Proc. 504, pp. 1544-1555. - Eubanks, T. Marshall (9 January 2014). "Quark Matter in the Solar System: Evidence for a Game-Changing Space Resource". Clifton, Virginia: Asteroid Initiatives LLC. - Eubanks, T.M. (7 November 2013). "Powering Starships with Compact Condensed Quark Matter". Clifton, Virginia: Asteroid Initiatives LLC. - "Are Black Hole Starships Possible?", Louis Crane, Shawn Westmoreland, 2009 - Chown, Marcus (25 November 2009). "Dark power: Grand designs for interstellar travel". New Scientist (2736). (subscription required) - Curtis G. Callan, Jr. (1982). "Dyon-fermion dynamics". Phys. Rev. D 26 (8): 2058–2068. Bibcode:1982PhRvD..26.2058C. doi:10.1103/PhysRevD.26.2058. - B. V. Sreekantan (1984). "Searches for Proton Decay and Superheavy Magnetic Monopoles". Journal of Astrophysics and Astronomy 5: 251–271. Bibcode:1984JApA....5..251S. doi:10.1007/BF02714542. - Remote Sensing Tutorial Page A-10 - Williams, Lance (6 April 2012). "Electromagnetic Control of Spacetime and Gravity: The Hard Problem of Interstellar Travel". Astronomical Review 7 (2). Bibcode:2012AstRv...7b...5W. - "Ideas Based On What We’d Like To Achieve: Worm Hole transportation". NASA Glenn Research Center. - John G. Cramer, Robert L. Forward, Michael S. Morris, Matt Visser, Gregory Benford, and Geoffrey A. Landis (15 March 1995). "Natural Wormholes as Gravitational Lenses". Phys. Rev. D 51 (3117): 3117–3120. arXiv:ph/9409051. doi:10.1103/PhysRevD.51.3117. - Visser, M. (1995). Lorentzian Wormholes: from Einstein to Hawking. AIP Press, Woodbury NY. ISBN 1-56396-394-9. - Enzmann Starship - Gilster, Paul (April 1, 2007). "A Note on the Enzmann Starship". Centauri Dreams. - "Icarus Interstellar - Project Hyperion". Retrieved 13 April 2013. - http://www.grc.nasa.gov/WWW/bpp "Breakthrough Propulsion Physics" project at NASA Glenn Research Center, Nov 19, 2008 - http://www.nasa.gov/centers/glenn/technology/warp/warp.html Warp Drive, When? Breakthrough Technologies January 26, 2009 - Malik, Tariq, "Sex and Society Aboard the First Starships." Science Tuesday, Space.com March 19, 2002. - Moskowitz, Clara (17 September 2012). "Warp Drive May Be More Feasible Than Thought, Scientists Say". space.com. - Forward, R. L. (May–June 1985). "Starwisp - An ultra-light interstellar probe". Journal of Spacecraft and Rockets 22 (3): 345–350. Bibcode:1985JSpRo..22..345F. doi:10.2514/3.25754. - Benford, James; Benford, Gregory. "Near-Term Beamed Sail Propulsion Missions: Cosmos-1 and Sun-Diver". Department of Physics, University of California, Irvine. - O’Neill, Ian (Aug 19, 2008). 
"Interstellar travel may remain in science fiction". Universe Today. - Hein, A.M. (September 2012). "Evaluation of Technological-Social and Political Projections for the Next 100-300 Years and the Implications for an Interstellar Mission". Journal of the British Interplanetary Society 33 (09/10): 330–340. - Long, Kelvin (2012). Deep Space Propulsion: A Roadmap to Interstellar Flight. Springer. ISBN 978-1461406068. - Mallove, Eugene (1989). The Starflight Handbook. John Wiley & Sons, Inc. ISBN 0-471-61912-4. - Woodward, James (2013). Making Starships and Stargates: The Science of Interstellar Transport and Absurdly Benign Wormholes. Springer. ISBN 978-1461456223. - Zubrin, Robert (1999). Entering Space: Creating a Spacefaring Civilization. Tarcher / Putnam. ISBN 1-58542-036-0. - Leonard David – Reaching for interstellar flight (2003) – MSNBC (MSNBC Webpage) - NASA Breakthrough Propulsion Physics Program (NASA Webpage) - Bibliography of Interstellar Flight (source list) - DARPA seeks help for interstellar starship
The observable universe is a spherical region of the Universe comprising all matter that can be observed from Earth or its space-based telescopes and exploratory probes at the present time, because electromagnetic radiation from these objects has had time to reach the Solar System and Earth since the beginning of the cosmological expansion. There are at least 2 trillion galaxies in the observable universe. Assuming the Universe is isotropic, the distance to the edge of the observable universe is roughly the same in every direction. That is, the observable universe has a spherical volume (a ball) centered on the observer. Every location in the Universe has its own observable universe, which may or may not overlap with the one centered on Earth. The word observable in this sense does not refer to the capability of modern technology to detect light or other information from an object, or whether there is anything to be detected. It refers to the physical limit created by the speed of light itself. Because no signals can travel faster than light, any object farther away from us than light could travel in the age of the Universe (estimated as of 2015 around 13.799±0.021 billion years) simply cannot be detected, as the signals could not have reached us yet. Sometimes astrophysicists distinguish between the visible universe, which includes only signals emitted since recombination (when hydrogen atoms were formed from protons and electrons and photons were emitted)—and the observable universe, which includes signals since the beginning of the cosmological expansion (the Big Bang in traditional physical cosmology, the end of the inflationary epoch in modern cosmology). According to calculations, the current comoving distance—proper distance, which takes into account that the universe has expanded since the light was emitted—to particles from which the cosmic microwave background radiation (CMBR) was emitted, which represent the radius of the visible universe, is about 14.0 billion parsecs (about 45.7 billion light-years), while the comoving distance to the edge of the observable universe is about 14.3 billion parsecs (about 46.6 billion light-years), about 2% larger. The radius of the observable universe is therefore estimated to be about 46.5 billion light-years and its diameter about 28.5 gigaparsecs (93 billion light-years, 8.8×10^23 kilometres or 5.5×10^23 miles). The total mass of ordinary matter in the universe can be calculated using the critical density and the diameter of the observable universe to be about 1.5×10^53 kg. In November 2018, astronomers reported that the extragalactic background light (EBL) amounted to 4×10^84 photons. Since the expansion of the universe is known to accelerate and will become exponential in the future, the light emitted from all distant objects, past some time dependent on their current redshift, will never reach the Earth. In the future all currently observable objects will slowly freeze in time while emitting progressively redder and fainter light. For instance, objects with the current redshift z from 5 to 10 will remain observable for no more than 4–6 billion years. In addition, light emitted by objects currently situated beyond a certain comoving distance (currently about 19 billion parsecs) will never reach Earth.
[Figure: Visualization of the whole observable universe. The scale is such that the fine grains represent collections of large numbers of superclusters. The Virgo Supercluster—home of the Milky Way—is marked at the center, but is too small to be seen.]
|Diameter||8.8×10^26 m (28.5 Gpc or 93 Gly)|
|Mass (ordinary matter)||4.5×10^51 kg|
|Density (of total energy)||9.9×10^−27 kg/m³ (equivalent to 6 protons per cubic meter of space)|
|Age||13.799±0.021 billion years|
|Average temperature||2.72548 K|
Some parts of the universe are too far away for the light emitted since the Big Bang to have had enough time to reach Earth or its scientific space-based instruments, and so lie outside the observable universe. In the future, light from distant galaxies will have had more time to travel, so additional regions will become observable. However, due to Hubble's law, regions sufficiently distant from the Earth are expanding away from it faster than the speed of light (special relativity prevents nearby objects in the same local region from moving faster than the speed of light with respect to each other, but there is no such constraint for distant objects when the space between them is expanding; see uses of the proper distance for a discussion) and furthermore the expansion rate appears to be accelerating due to dark energy. Assuming dark energy remains constant (an unchanging cosmological constant), so that the expansion rate of the universe continues to accelerate, there is a "future visibility limit" beyond which objects will never enter our observable universe at any time in the infinite future, because light emitted by objects outside that limit would never reach the Earth. (A subtlety is that, because the Hubble parameter is decreasing with time, there can be cases where a galaxy that is receding from the Earth just a bit faster than light does emit a signal that reaches the Earth eventually.) This future visibility limit is calculated at a comoving distance of 19 billion parsecs (62 billion light-years), assuming the universe will keep expanding forever, which implies the number of galaxies that we can ever theoretically observe in the infinite future (leaving aside the issue that some may be impossible to observe in practice due to redshift, as discussed in the following paragraph) is only larger than the number currently observable by a factor of 2.36. Though in principle more galaxies will become observable in the future, in practice an increasing number of galaxies will become extremely redshifted due to ongoing expansion, so much so that they will seem to disappear from view and become invisible. An additional subtlety is that a galaxy at a given comoving distance is defined to lie within the "observable universe" if we can receive signals emitted by the galaxy at any age in its past history (say, a signal sent from the galaxy only 500 million years after the Big Bang), but because of the universe's expansion, there may be some later age at which a signal sent from the same galaxy can never reach the Earth at any point in the infinite future (so, for example, we might never see what the galaxy looked like 10 billion years after the Big Bang), even though it remains at the same comoving distance (comoving distance is defined to be constant with time—unlike proper distance, which is used to define recession velocity due to the expansion of space), which is less than the comoving radius of the observable universe. This fact can be used to define a type of cosmic event horizon whose distance from the Earth changes over time.
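For a sense of scale, the sketch below (Python; it assumes the Planck value H0 = 67.15 km/s/Mpc quoted later in this article) applies Hubble's law, v = H0·D, to find the distance at which the recession velocity formally equals the speed of light, often called the Hubble radius.

```python
C_KM_S = 299_792.458       # speed of light, km/s
H0 = 67.15                 # Hubble constant, km/s per megaparsec (Planck value quoted below)
MPC_TO_GLY = 3.2616e-3     # one megaparsec expressed in billions of light-years

# Hubble's law: v = H0 * D.  Setting v = c gives the Hubble radius.
hubble_radius_mpc = C_KM_S / H0
print(f"Hubble radius: {hubble_radius_mpc:,.0f} Mpc "
      f"= {hubble_radius_mpc * MPC_TO_GLY:.1f} billion light-years")   # ~4,465 Mpc, ~14.6 Gly
```

Galaxies beyond roughly this distance are receding faster than light today, yet, as noted in the parenthetical remark above, some of their light can still eventually reach us because the Hubble parameter decreases with time. The cosmic event horizon defined in the preceding paragraph is a different, somewhat larger surface, and its present distance is worked out next.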
For example, the current distance to this horizon is about 16 billion light-years, meaning that a signal from an event happening at present can eventually reach the Earth in the future if the event is less than 16 billion light-years away, but the signal will never reach the Earth if the event is more than 16 billion light-years away. Both popular and professional research articles in cosmology often use the term "universe" to mean "observable universe". This can be justified on the grounds that we can never know anything by direct experimentation about any part of the universe that is causally disconnected from the Earth, although many credible theories require a total universe much larger than the observable universe. No evidence exists to suggest that the boundary of the observable universe constitutes a boundary on the universe as a whole, nor do any of the mainstream cosmological models propose that the universe has any physical boundary in the first place, though some models propose it could be finite but unbounded, like a higher-dimensional analogue of the 2D surface of a sphere that is finite in area but has no edge. It is plausible that the galaxies within our observable universe represent only a minuscule fraction of the galaxies in the universe. According to the theory of cosmic inflation initially introduced by its founder, Alan Guth (and by D. Kazanas), if it is assumed that inflation began about 10^−37 seconds after the Big Bang, then with the plausible assumption that the size of the universe before the inflation occurred was approximately equal to the speed of light times its age, that would suggest that at present the entire universe's size is at least 3×10^23 times the radius of the observable universe. There are also lower estimates claiming that the entire universe is in excess of 250 times larger (by volume, not by radius) than the observable universe and also higher estimates implying that the universe could have a size of at least 10^(10^(10^122)) Mpc. If the universe is finite but unbounded, it is also possible that the universe is smaller than the observable universe. In this case, what we take to be very distant galaxies may actually be duplicate images of nearby galaxies, formed by light that has circumnavigated the universe. It is difficult to test this hypothesis experimentally because different images of a galaxy would show different eras in its history, and consequently might appear quite different. Bielewicz et al. claim to establish a lower bound of 27.9 gigaparsecs (91 billion light-years) on the diameter of the last scattering surface (since this is only a lower bound, the paper leaves open the possibility that the whole universe is much larger, even infinite). This value is based on matching-circle analysis of the WMAP 7-year data. This approach has been disputed. The comoving distance from Earth to the edge of the observable universe is about 14.26 gigaparsecs (46.5 billion light-years or 4.40×10^26 meters) in any direction. The observable universe is thus a sphere with a diameter of about 28.5 gigaparsecs (93 billion light-years or 8.8×10^26 meters). Assuming that space is roughly flat (in the sense of being a Euclidean space), this size corresponds to a comoving volume of about 1.22×10^4 Gpc³ (4.22×10^5 Gly³ or 3.57×10^80 m³). The figures quoted above are distances now (in cosmological time), not distances at the time the light was emitted.
For example, the cosmic microwave background radiation that we see right now was emitted at the time of photon decoupling, estimated to have occurred about 380,000 years after the Big Bang, which occurred around 13.8 billion years ago. This radiation was emitted by matter that has, in the intervening time, mostly condensed into galaxies, and those galaxies are now calculated to be about 46 billion light-years from us. To estimate the distance to that matter at the time the light was emitted, we may first note that according to the Friedmann–Lemaître–Robertson–Walker metric, which is used to model the expanding universe, if at the present time we receive light with a redshift of z, then the scale factor at the time the light was originally emitted is given by a = 1/(1 + z). WMAP nine-year results combined with other measurements give the redshift of photon decoupling as z = 1091.64±0.47, which implies that the scale factor at the time of photon decoupling would be 1/1092.64. So if the matter that originally emitted the oldest CMBR photons has a present distance of 46 billion light-years, then at the time of decoupling when the photons were originally emitted, the distance would have been only about 42 million light-years. Many secondary sources have reported a wide variety of incorrect figures for the size of the visible universe. Some of these figures are listed below, with brief descriptions of possible reasons for misconceptions about them. Sky surveys and mappings of the various wavelength bands of electromagnetic radiation (in particular 21-cm emission) have yielded much information on the content and character of the universe's structure. The organization of structure appears to follow a hierarchical model with organization up to the scale of superclusters and filaments. Larger than this (at scales between 30 and 200 megaparsecs), there seems to be no continued structure, a phenomenon that has been referred to as the End of Greatness. The organization of structure arguably begins at the stellar level, though most cosmologists rarely address astrophysics on that scale. Stars are organized into galaxies, which in turn form galaxy groups, galaxy clusters, superclusters, sheets, walls and filaments, which are separated by immense voids, creating a vast foam-like structure sometimes called the "cosmic web". Prior to 1989, it was commonly assumed that virialized galaxy clusters were the largest structures in existence, and that they were distributed more or less uniformly throughout the universe in every direction. However, since the early 1980s, more and more structures have been discovered. In 1983, Adrian Webster identified the Webster LQG, a large quasar group consisting of 5 quasars. The discovery was the first identification of a large-scale structure, and has expanded the information about the known grouping of matter in the universe. In 1987, Robert Brent Tully identified the Pisces–Cetus Supercluster Complex, the galaxy filament in which the Milky Way resides. It is about 1 billion light-years across. That same year, an unusually large region with a much lower than average distribution of galaxies was discovered, the Giant Void, which measures 1.3 billion light-years across. Based on redshift survey data, in 1989 Margaret Geller and John Huchra discovered the "Great Wall", a sheet of galaxies more than 500 million light-years long and 200 million light-years wide, but only 15 million light-years thick.
The existence of this structure escaped notice for so long because it requires locating the position of galaxies in three dimensions, which involves combining location information about the galaxies with distance information from redshifts. Two years later, astronomers Roger G. Clowes and Luis E. Campusano discovered the Clowes–Campusano LQG, a large quasar group measuring two billion light-years at its widest point which was the largest known structure in the universe at the time of its announcement. In April 2003, another large-scale structure was discovered, the Sloan Great Wall. In August 2007, a possible supervoid was detected in the constellation Eridanus. It coincides with the 'CMB cold spot', a cold region in the microwave sky that is highly improbable under the currently favored cosmological model. This supervoid could cause the cold spot, but to do so it would have to be improbably big, possibly a billion light-years across, almost as big as the Giant Void mentioned above. Another large-scale structure is the SSA22 Protocluster, a collection of galaxies and enormous gas bubbles that measures about 200 million light-years across. In 2011, a large quasar group was discovered, U1.11, measuring about 2.5 billion light-years across. On January 11, 2013, another large quasar group, the Huge-LQG, was discovered, which was measured to be four billion light-years across, the largest known structure in the universe at that time. In November 2013, astronomers discovered the Hercules–Corona Borealis Great Wall, an even bigger structure twice as large as the former. It was defined by the mapping of gamma-ray bursts. The End of Greatness is an observational scale discovered at roughly 100 Mpc (roughly 300 million light-years) where the lumpiness seen in the large-scale structure of the universe is homogenized and isotropized in accordance with the Cosmological Principle. At this scale, no pseudo-random fractalness is apparent. The superclusters and filaments seen in smaller surveys are randomized to the extent that the smooth distribution of the universe is visually apparent. It was not until the redshift surveys of the 1990s were completed that this scale could accurately be observed. Another indicator of large-scale structure is the 'Lyman-alpha forest'. This is a collection of absorption lines that appear in the spectra of light from quasars, which are interpreted as indicating the existence of huge thin sheets of intergalactic (mostly hydrogen) gas. These sheets appear to be associated with the formation of new galaxies. Caution is required in describing structures on a cosmic scale because things are often different from how they appear. Gravitational lensing (bending of light by gravitation) can make an image appear to originate in a different direction from its real source. This is caused when foreground objects (such as galaxies) curve surrounding spacetime (as predicted by general relativity), and deflect passing light rays. Rather usefully, strong gravitational lensing can sometimes magnify distant galaxies, making them easier to detect. Weak lensing (gravitational shear) by the intervening universe in general also subtly changes the observed large-scale structure. The large-scale structure of the universe also looks different if one only uses redshift to measure distances to galaxies. 
For example, galaxies behind a galaxy cluster are attracted to it, and so fall towards it, and so are slightly blueshifted (compared to how they would be if there were no cluster). On the near side, things are slightly redshifted. Thus, the environment of the cluster looks a bit squashed if using redshifts to measure distance. An opposite effect works on the galaxies already within a cluster: the galaxies have some random motion around the cluster center, and when these random motions are converted to redshifts, the cluster appears elongated. This creates a "finger of God"—the illusion of a long chain of galaxies pointed at the Earth. At the centre of the Hydra-Centaurus Supercluster, a gravitational anomaly called the Great Attractor affects the motion of galaxies over a region hundreds of millions of light-years across. These galaxies are all redshifted, in accordance with Hubble's law. This indicates that they are receding from us and from each other, but the variations in their redshift are sufficient to reveal the existence of a concentration of mass equivalent to tens of thousands of galaxies. The Great Attractor, discovered in 1986, lies at a distance of between 150 million and 250 million light-years (250 million is the most recent estimate), in the direction of the Hydra and Centaurus constellations. In its vicinity there is a preponderance of large old galaxies, many of which are colliding with their neighbours, or radiating large amounts of radio waves. In 1987, astronomer R. Brent Tully of the University of Hawaii's Institute of Astronomy identified what he called the Pisces–Cetus Supercluster Complex, a structure one billion light-years long and 150 million light-years across in which, he claimed, the Local Supercluster was embedded. The mass of the observable universe is often quoted as 10^50 tonnes or 10^53 kg. In this context, mass refers to ordinary matter and includes the interstellar medium (ISM) and the intergalactic medium (IGM). However, it excludes dark matter and dark energy. This quoted value for the mass of ordinary matter in the universe can be estimated based on critical density. The calculations are for the observable universe only, as the volume of the whole is unknown and may be infinite. Critical density is the energy density for which the universe is flat. If there is no dark energy, it is also the density for which the expansion of the universe is poised between continued expansion and collapse. From the Friedmann equations, the value for critical density is ρc = 3H²/(8πG), where G is the gravitational constant and H = H0 is the present value of the Hubble constant. The current value for H0, due to the European Space Agency's Planck Telescope, is H0 = 67.15 kilometers per second per megaparsec. This gives a critical density of 0.85×10^−26 kg/m³ (commonly quoted as about 5 hydrogen atoms per cubic meter). This density includes four significant types of energy/mass: ordinary matter (4.8%), neutrinos (0.1%), cold dark matter (26.8%), and dark energy (68.3%). Note that although neutrinos are Standard Model particles, they are listed separately because they are difficult to detect and quite different from ordinary matter. The density of ordinary matter, as measured by Planck, is 4.8% of the total critical density, or 4.08×10^−28 kg/m³. To convert this density to mass we must multiply by volume, a value based on the radius of the "observable universe". Since the universe has been expanding for 13.8 billion years, the comoving distance (radius) is now about 46.6 billion light-years.
Thus, the volume, (4/3)πr³, equals 3.58×10^80 m³, and the mass of ordinary matter equals density (4.08×10^−28 kg/m³) times volume (3.58×10^80 m³), or 1.46×10^53 kg. Assuming the mass of ordinary matter is about 1.45×10^53 kg (refer to previous section) and assuming all atoms are hydrogen atoms (which are about 74% of all atoms in our galaxy by mass, see Abundance of the chemical elements), calculating the estimated total number of atoms in the observable universe is straightforward. Divide the mass of ordinary matter by the mass of a hydrogen atom (1.45×10^53 kg divided by 1.67×10^−27 kg). The result is approximately 10^80 hydrogen atoms. The most distant astronomical object yet announced as of 2016 is a galaxy classified GN-z11. In 2009, a gamma ray burst, GRB 090423, was found to have a redshift of 8.2, which indicates that the collapsing star that caused it exploded when the universe was only 630 million years old. The burst happened approximately 13 billion years ago, so a distance of about 13 billion light-years was widely quoted in the media (or sometimes a more precise figure of 13.035 billion light-years), though this would be the "light travel distance" (see Distance measures (cosmology)) rather than the "proper distance" used in both Hubble's law and in defining the size of the observable universe (cosmologist Ned Wright argues against the common use of light travel distance in astronomical press releases, and his cosmology tutorial website offers online calculators that can be used to calculate the current proper distance to a distant object in a flat universe based on either the redshift z or the light travel time). The proper distance for a redshift of 8.2 would be about 9.2 Gpc, or about 30 billion light-years. Another record-holder for most distant object is a galaxy observed through and located beyond Abell 2218, also with a light travel distance of approximately 13 billion light-years from Earth, with observations from the Hubble telescope indicating a redshift between 6.6 and 7.1, and observations from Keck telescopes indicating a redshift towards the upper end of this range, around 7. The galaxy's light now observable on Earth would have begun to emanate from its source about 750 million years after the Big Bang. The limit of observability in our universe is set by a set of cosmological horizons which limit—based on various physical constraints—the extent to which we can obtain information about various events in the universe. The most famous horizon is the particle horizon which sets a limit on the precise distance that can be seen due to the finite age of the universe. Additional horizons are associated with the possible future extent of observations (larger than the particle horizon owing to the expansion of space), an "optical horizon" at the surface of last scattering, and associated horizons with the surface of last scattering for neutrinos and gravitational waves.
An astronomical object or celestial object is a naturally occurring physical entity, association, or structure that exists in the observable universe. In astronomy, the terms object and body are often used interchangeably. However, an astronomical body or celestial body is a single, tightly bound, contiguous entity, while an astronomical or celestial object is a complex, less cohesively bound structure, which may consist of multiple bodies or even other objects with substructures.
Examples of astronomical objects include planetary systems, star clusters, nebulae, and galaxies, while asteroids, moons, planets, and stars are astronomical bodies. A comet may be identified as both body and object: it is a body when referring to the frozen nucleus of ice and dust, and an object when describing the entire comet with its diffuse coma and tail.

Black hole cosmology
A black hole cosmology (also called Schwarzschild cosmology or black hole cosmological model) is a cosmological model in which the observable universe is the interior of a black hole. Such models were originally proposed by theoretical physicist Raj Pathria, and concurrently by mathematician I. J. Good. Any such model requires that the Hubble radius of the observable universe is equal to its Schwarzschild radius, that is, the product of its mass and the Schwarzschild proportionality constant. This is indeed known to be nearly the case; however, most cosmologists consider this close match a coincidence. In the version as originally proposed by Pathria and Good, and studied more recently by, among others, Nikodem Popławski, the observable universe is the interior of a black hole existing as one of possibly many inside a larger parent universe, or multiverse.

According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular Schwarzschild black hole. In the Einstein–Cartan–Sciama–Kibble theory of gravity, however, it forms a regular Einstein–Rosen bridge, or wormhole. Schwarzschild wormholes and Schwarzschild black holes are different mathematical solutions of general relativity and the Einstein–Cartan theory. Yet for observers, the exteriors of both solutions with the same mass are indistinguishable. The Einstein–Cartan theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. Torsion naturally accounts for the quantum-mechanical, intrinsic angular momentum (spin) of matter. The minimal coupling between torsion and Dirac spinors generates a repulsive spin-spin interaction which is significant in fermionic matter at extremely high densities. Such an interaction prevents the formation of a gravitational singularity. Instead, the collapsing matter reaches an enormous but finite density and rebounds, forming the other side of an Einstein–Rosen bridge, which grows as a new universe. Accordingly, the Big Bang was a nonsingular Big Bounce at which the universe had a finite, minimum scale factor. Or, the Big Bang was a supermassive white hole that was the result of a supermassive black hole at the heart of a galaxy in our parent universe.

CfA2 Great Wall
The Great Wall (also called Coma Wall), sometimes specifically referred to as the CfA2 Great Wall, is an immense galaxy filament. It is one of the largest known superstructures in the observable universe. This structure was discovered c. 1989 by a team of American astronomers led by Margaret J. Geller and John Huchra while analyzing data gathered by the second CfA Redshift Survey of the Harvard-Smithsonian Center for Astrophysics (CfA).

Cosmological horizon
A cosmological horizon is a measure of the distance from which one could possibly retrieve information. This observable constraint is due to various properties of general relativity, the expanding universe, and the physics of Big Bang cosmology. Cosmological horizons set the size and scale of the observable universe.
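The near-coincidence noted in the black hole cosmology entry above, that the Hubble radius is close to the Schwarzschild radius of the mass it encloses, can be checked numerically. The sketch below is illustrative only: it assumes the universe sits at exactly the critical density and reuses the H0 value quoted earlier, in which case the two radii agree exactly by construction.

import math

G = 6.674e-11                    # m^3 kg^-1 s^-2
c = 2.998e8                      # m/s
H0 = 67.15e3 / 3.0857e22         # Hubble constant in 1/s (value from the text)

hubble_radius = c / H0                                  # proper radius of the Hubble sphere
rho_c = 3 * H0**2 / (8 * math.pi * G)                   # critical density
mass = rho_c * (4.0 / 3.0) * math.pi * hubble_radius**3 # mass inside the Hubble sphere
schwarzschild_radius = 2 * G * mass / c**2              # r_s = 2GM/c^2

print(f"Hubble radius        ~ {hubble_radius:.2e} m")
print(f"Schwarzschild radius ~ {schwarzschild_radius:.2e} m")
# At exactly the critical density the two radii coincide, which is the
# near-match that the black hole cosmology entry refers to.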
This article explains a number of these horizons.

Extragalactic astronomy
Extragalactic astronomy is the branch of astronomy concerned with objects outside the Milky Way galaxy. In other words, it is the study of all astronomical objects which are not covered by galactic astronomy. As instrumentation has improved, distant objects can now be examined in more detail. It is therefore useful to sub-divide this branch into Near-Extragalactic Astronomy and Far-Extragalactic Astronomy. The former deals with objects such as the galaxies of the Local Group, which are close enough to allow very detailed analyses of their contents (e.g. supernova remnants, stellar associations). Some topics include: galaxy clusters and superclusters; active galactic nuclei; quasars; and the observable universe.

GN-z11
GN-z11 is a high-redshift galaxy found in the constellation Ursa Major. GN-z11 is currently the oldest and most distant known galaxy in the observable universe. GN-z11 has a spectroscopic redshift of z = 11.09, which corresponds to a proper distance of approximately 32 billion light-years (9.8 billion parsecs). The object's name is derived from its location in the GOODS-North field of galaxies and its high cosmological redshift number (GN + z11). GN-z11 is observed as it existed 13.4 billion years ago, just 400 million years after the Big Bang; as a result, GN-z11's distance is sometimes inappropriately reported as 13.4 billion light-years, its light travel distance measurement.

Graham's number
Graham's number is an immense number that arises as an upper bound on the answer of a problem in the mathematical field of Ramsey theory. It is named after mathematician Ronald Graham, who used the number as a simplified explanation of the upper bounds of the problem he was working on in conversations with popular science writer Martin Gardner. Gardner later described the number in Scientific American in 1977, introducing it to the general public. At the time of its introduction, it was the largest specific positive integer ever to have been used in a published mathematical proof. The number was published in the 1980 Guinness Book of World Records, adding to its popular interest. Other specific integers (such as TREE(3)) known to be far larger than Graham's number have since appeared in many serious mathematical proofs, for example in connection with Harvey Friedman's various finite forms of Kruskal's theorem. Additionally, smaller upper bounds on the Ramsey theory problem from which Graham's number derived have since been proven to be valid.

Graham's number is much larger than many other large numbers such as Skewes' number and Moser's number, both of which are in turn much larger than a googolplex. As with these, it is so large that the observable universe is far too small to contain an ordinary digital representation of Graham's number, assuming that each digit occupies one Planck volume, possibly the smallest measurable space. But even the number of digits in this digital representation of Graham's number would itself be a number so large that its digital representation cannot be represented in the observable universe. Nor even can the number of digits of that number, and so forth, for a number of times far exceeding the total number of Planck volumes in the observable universe. Thus Graham's number cannot even be expressed in this way by power towers of the form a^(a^(a^(⋯))). However, Graham's number can be explicitly given by computable recursive formulas using Knuth's up-arrow notation or equivalent, as was done by Graham.
As there is a recursive formula to define it, it is much smaller than typical busy beaver numbers. Though too large to be computed in full, the sequence of digits of Graham's number can be computed explicitly through simple algorithms. The last 12 digits are ...262464195387. With Knuth's up-arrow notation, Graham's number is g64, where g1 = 3↑↑↑↑3 and each subsequent gn (for n from 2 to 64) is a 3, followed by g(n−1) up-arrows, followed by a 3.

Hubble volume
In cosmology, a Hubble volume or Hubble sphere is a spherical region of the observable universe surrounding an observer beyond which objects recede from that observer at a rate greater than the speed of light due to the expansion of the Universe. The Hubble volume is approximately equal to 10³¹ cubic light-years. The proper radius of a Hubble sphere (known as the Hubble radius or the Hubble length) is c/H0, where c is the speed of light and H0 is the Hubble constant. The surface of a Hubble sphere is called the microphysical horizon, the Hubble surface, or the Hubble limit. More generally, the term "Hubble volume" can be applied to any region of space with a volume of order (c/H0)³. However, the term is also frequently (but mistakenly) used as a synonym for the observable universe; the latter is larger than the Hubble volume.

Location of Earth
Knowledge of the location of Earth has been shaped by 400 years of telescopic observations, and has expanded radically in the last century. Initially, Earth was believed to be the center of the Universe, which consisted only of those planets visible with the naked eye and an outlying sphere of fixed stars. After the acceptance of the heliocentric model in the 17th century, observations by William Herschel and others showed that the Sun lay within a vast, disc-shaped galaxy of stars. By the 20th century, observations of spiral nebulae revealed that our galaxy was one of billions in an expanding universe, grouped into clusters and superclusters. By the end of the 20th century, the overall structure of the visible universe was becoming clearer, with superclusters forming into a vast web of filaments and voids. Superclusters, filaments and voids are the largest coherent structures in the Universe that we can observe. At still larger scales (over 1000 megaparsecs) the Universe becomes homogeneous, meaning that all its parts have on average the same density, composition and structure.

Since there is believed to be no "center" or "edge" of the Universe, there is no particular reference point with which to plot the overall location of the Earth in the universe. Because the observable universe is defined as that region of the Universe visible to terrestrial observers, Earth is, because of the constancy of the speed of light, the center of Earth's observable universe. Reference can be made to the Earth's position with respect to specific structures, which exist at various scales. It is still undetermined whether the Universe is infinite. There have been numerous hypotheses that our universe may be only one such example within a higher multiverse; however, no direct evidence of any sort of multiverse has ever been observed, and some have argued that the hypothesis is not falsifiable.

Mass-to-light ratio
In astrophysics and physical cosmology the mass-to-light ratio, normally designated with the Greek letter upsilon, ϒ, is the quotient between the total mass of a spatial volume (typically on the scales of a galaxy or a cluster) and its luminosity. These ratios are often reported using the value calculated for the Sun as a baseline ratio, which is a constant ϒ☉ = 5133 kg/W, equal to the solar mass M☉ divided by the solar luminosity L☉, M☉/L☉.
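Returning briefly to the Graham's number entry above: the "simple algorithms" for its trailing digits boil down to modular arithmetic. Graham's number is, when fully expanded, a power tower of 3s of enormous height, and the last k decimal digits of such a tower stop changing once the tower is tall enough. A minimal sketch follows; the iteration depth of 20 is an arbitrary choice that is comfortably more than enough for 12 digits to stabilise.

# Last digits of Graham's number via the stabilising tower 3^(3^(3^...)) mod 10^k.
def last_digits(k: int, height: int = 20) -> str:
    mod = 10 ** k
    x = 3
    for _ in range(height):      # repeatedly form 3 ** (previous value), mod 10^k
        x = pow(3, x, mod)
    return str(x).zfill(k)

print(last_digits(12))           # expected to print 262464195387, matching the text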
The mass-to-light ratios of galaxies and clusters are all much greater than ϒ☉, due in part to the fact that most of the matter in these objects does not reside within stars, and observations suggest that a large fraction is present in the form of dark matter. Luminosities are obtained from photometric observations, correcting the observed brightness of the object for distance dimming and extinction effects. In general, unless a complete spectrum of the radiation emitted by the object is obtained, a model must be extrapolated through either power law or blackbody fits. The luminosity thus obtained is known as the bolometric luminosity. Masses are often calculated from the dynamics of the virialized system or from gravitational lensing. Typical mass-to-light ratios for galaxies range from 2 to 10 ϒ☉, while on the largest scales the mass-to-light ratio of the observable universe is approximately 100 ϒ☉, in concordance with the current best fit cosmological model.

NGC 1260
NGC 1260 is a spiral or lenticular galaxy in the constellation Perseus. It was discovered by astronomer Guillaume Bigourdan on October 19, 1884. NGC 1260 is a member of the Perseus Cluster and forms a tight pair with the galaxy PGC 12230. In 2006, it was home to the second brightest supernova in the observable universe, supernova SN 2006gy.

Particle horizon
The particle horizon (also called the cosmological horizon, the comoving horizon (in Dodelson's text), or the cosmic light horizon) is the maximum distance from which particles could have traveled to the observer in the age of the universe. Much like the concept of a terrestrial horizon, it represents the boundary between the observable and the unobservable regions of the universe, so its distance at the present epoch defines the size of the observable universe. Due to the expansion of the universe it is not simply the age of the universe times the speed of light (approximately 13.8 billion light-years), but rather the speed of light times the conformal time. The existence, properties, and significance of a cosmological horizon depend on the particular cosmological model.

Pocket universe
A pocket universe is a concept in inflationary theory, proposed by Alan Guth. It defines a realm like the one that contains the observable universe as only one of many inflationary zones. Astrophysicist Jean-Luc Lehners, of the Princeton Center for Theoretical Science, has argued that an inflationary universe does produce pockets. In a 2012 journal article, Lehners wrote about how pocket universes can emerge as a result of eternal inflation. The mechanisms of inflation within these pocket universes could function in a variety of ways, such as slow-roll inflation, undergoing cycles of cosmological evolution, or resembling the Galilean genesis or other 'emergent' universe scenarios. Lehners goes on to discuss which one of these types of universes we live in, and how that is dependent on the measurement of the regulation of infinities inherent in eternal inflation. But, Lehners continues, "the current leading measure proposals—namely, the global light-cone cutoff and its local counterpart, the causal diamond measure—as well as closely related proposals, all predict that we should live in a pocket universe that starts out with a small Hubble rate, thus favoring emergent and cyclic models."
Lehners adds, deadpan, "Pocket universes which undergo cycles are further preferred, because they produce habitable conditions repeatedly inside each pocket."

Shape of the universe
The shape of the universe is the local and global geometry of the universe. The local features of the geometry of the universe are primarily described by its curvature, whereas the topology of the universe describes general global properties of its shape as a continuous object. The shape of the universe is related to general relativity, which describes how spacetime is curved and bent by mass and energy.

Cosmologists distinguish between the observable universe and the global universe. The observable universe consists of the part of the universe that can, in principle, be observed by light reaching Earth within the age of the universe. It encompasses a region of space that currently forms a ball centered at Earth of estimated radius 46.5 billion light-years (4.40×10²⁶ m). This does not mean the universe is 46.5 billion years old; instead the universe is measured to be 13.8 billion years old, but space itself has also expanded, causing the size of the observable universe to be as stated. (However, it is possible to observe these distant areas only in their very distant past, when the distance light had to travel was much less.) Assuming an isotropic nature, the observable universe is similar for all contemporary vantage points.

The global shape of the universe can be described with three attributes:
- Finite or infinite
- Flat (no curvature), open (negative curvature), or closed (positive curvature)
- Connectivity: how the universe is put together, i.e., simply connected space or multiply connected.

There are certain logical connections among these properties. For example, a universe with positive curvature is necessarily finite. Although it is usually assumed in the literature that a flat or negatively curved universe is infinite, this need not be the case if the topology is not the trivial one.

The exact shape is still a matter of debate in physical cosmology, but experimental data from various independent sources (WMAP, BOOMERanG, and Planck for example) confirm that the observable universe is flat with only a 0.4% margin of error. Theorists have been trying to construct a formal mathematical model of the shape of the universe. In formal terms, this is a 3-manifold model corresponding to the spatial section (in comoving coordinates) of the 4-dimensional spacetime of the universe. The model most theorists currently use is the Friedmann–Lemaître–Robertson–Walker (FLRW) model. Arguments have been put forward that the observational data best fit with the conclusion that the shape of the global universe is infinite and flat, but the data are also consistent with other possible shapes, such as the so-called Poincaré dodecahedral space and the Sokolov–Starobinskii space (the quotient of the upper half-space model of hyperbolic space by a 2-dimensional lattice).

Supercluster
A supercluster is a large group of smaller galaxy clusters or galaxy groups; it is among the largest-known structures of the cosmos. The Milky Way is part of the Local Group galaxy group (which contains more than 54 galaxies), which in turn is part of the Virgo Supercluster, which is part of the Laniakea Supercluster. This supercluster spans over 500 million light-years, while the Local Group spans over 10 million light-years. The large size and low density of superclusters means they, unlike clusters, expand with the Hubble expansion.
The number of superclusters in the observable universe is estimated to be 10 million.

Universal probability bound
A universal probability bound is a probabilistic threshold whose existence is asserted by William A. Dembski and is used by him in his works promoting intelligent design. It is defined as a degree of improbability below which a specified event of that probability cannot reasonably be attributed to chance regardless of whatever probabilistic resources from the known universe are factored in. Dembski asserts that one can effectively estimate a positive value which is a universal probability bound. The existence of such a bound would imply that certain kinds of random events whose probability lies below this value can be assumed not to have occurred in the observable universe, given the resources available in the entire history of the observable universe. Contrapositively, Dembski uses the threshold to argue that the occurrence of certain events cannot be attributed to chance alone. The universal probability bound is then used to argue against random evolution. However, evolution is not based on random events only (genetic drift), but also on natural selection.

The idea that events with fantastically small but positive probabilities are effectively negligible was discussed by the French mathematician Émile Borel, primarily in the context of cosmology and statistical mechanics. However, there is no widely accepted scientific basis for claiming that certain positive values are universal cutoff points for effective negligibility of events. Borel, in particular, was careful to point out that negligibility was relative to a model of probability for a specific physical system.

Dembski appeals to cryptographic practice in support of the concept of the universal probability bound, noting that cryptographers have sometimes compared the security of encryption algorithms against brute force attacks by the likelihood of success of an adversary utilizing computational resources bounded by very large physical constraints. An example of such a constraint might be obtained, for example, by assuming that every atom in the observable universe is a computer of a certain type and these computers are running through and testing every possible key. Although universal measures of security are used much less frequently than asymptotic ones, and the fact that a keyspace is very large may be less relevant if the cryptographic algorithm used has vulnerabilities which make it susceptible to other kinds of attacks, asymptotic approaches and directed attacks would, by definition, be unavailable under chance-based scenarios such as those relevant to Dembski's universal probability bound. As a result, Dembski's appeal to cryptography is best understood as referring to brute force attacks, rather than directed attacks.

Universe
The Universe is all of space and time and their contents, including planets, stars, galaxies, and all other forms of matter and energy. While the spatial size of the entire Universe is unknown, it is possible to measure the size of the observable universe, which is currently estimated to be 93 billion light-years in diameter. In various multiverse hypotheses, a universe is one of many causally disconnected constituent parts of a larger multiverse, which itself comprises all of space and time and its contents.

The earliest scientific models of the Universe were developed by ancient Greek and Indian philosophers and were geocentric, placing Earth at the center of the Universe.
Over the centuries, more precise astronomical observations led Nicolaus Copernicus to develop the heliocentric model with the Sun at the center of the Solar System. In developing the law of universal gravitation, Isaac Newton built upon Copernicus' work as well as observations by Tycho Brahe and Johannes Kepler's laws of planetary motion. Further observational improvements led to the realization that the Sun is one of hundreds of billions of stars in the Milky Way, which is one of at least hundreds of billions of galaxies in the Universe. Many of the stars in our galaxy have planets. At the largest scale galaxies are distributed uniformly and the same in all directions, meaning that the Universe has neither an edge nor a center. At smaller scales, galaxies are distributed in clusters and superclusters which form immense filaments and voids in space, creating a vast foam-like structure. Discoveries in the early 20th century have suggested that the Universe had a beginning and that space has been expanding since then, and is currently still expanding at an increasing rate.

The Big Bang theory is the prevailing cosmological description of the development of the Universe. Under this theory, space and time emerged together 13.799±0.021 billion years ago and the energy and matter initially present have become less dense as the Universe expanded. After an initial accelerated expansion called the inflationary epoch at around 10⁻³² seconds, and the separation of the four known fundamental forces, the Universe gradually cooled and continued to expand, allowing the first subatomic particles and simple atoms to form. Dark matter gradually gathered, forming a foam-like structure of filaments and voids under the influence of gravity. Giant clouds of hydrogen and helium were gradually drawn to the places where dark matter was most dense, forming the first galaxies, stars, and everything else seen today. It is possible to see objects that are now further away than 13.799 billion light-years because space itself has expanded, and it is still expanding today. This means that objects which are now up to 46.5 billion light-years away can still be seen in their distant past, because in the past, when their light was emitted, they were much closer to the Earth.

From studying the movement of galaxies, it has been discovered that the universe contains much more matter than is accounted for by visible objects: stars, galaxies, nebulae and interstellar gas. This unseen matter is known as dark matter ("dark" meaning that there is a wide range of strong indirect evidence that it exists, but we have not yet detected it directly). The ΛCDM model is the most widely accepted model of our universe. It suggests that about 69.2%±1.2% of the mass and energy in the universe is a cosmological constant (or, in extensions to ΛCDM, other forms of dark energy such as a scalar field) which is responsible for the current expansion of space, and about 25.8%±1.1% is dark matter. Ordinary ("baryonic") matter is therefore only 4.9% of the physical universe. Stars, planets, and visible gas clouds only form about 6% of ordinary matter, or about 0.3% of the entire universe.

There are many competing hypotheses about the ultimate fate of the universe and about what, if anything, preceded the Big Bang, while other physicists and philosophers refuse to speculate, doubting that information about prior states will ever be accessible.
Some physicists have suggested various multiverse hypotheses, in which our universe might be one among many universes that likewise exist.

Virgo Supercluster
The Virgo Supercluster (Virgo SC) or the Local Supercluster (LSC or LS) is a mass concentration of galaxies containing the Virgo Cluster and Local Group, which in turn contains the Milky Way and Andromeda galaxies. At least 100 galaxy groups and clusters are located within its diameter of 33 megaparsecs (110 million light-years). The Virgo SC is one of about 10 million superclusters in the observable universe and is in the Pisces–Cetus Supercluster Complex, a galaxy filament. A 2014 study indicates that the Virgo Supercluster is only a lobe of an even greater supercluster, Laniakea, a larger, competing referent of the Local Supercluster centered on the Great Attractor.

Yotta-
Yotta is the largest decimal unit prefix in the metric system, denoting a factor of 10²⁴ or 1,000,000,000,000,000,000,000,000; that is, one million million million million, or one septillion. It has the unit symbol Y. The prefix name is derived from the Ancient Greek οκτώ (októ), meaning "eight", because it is equal to 1,000⁸. It was added as an SI prefix to the International System of Units (SI) in 1991.

Usage examples: The mass of the Earth is 5,972.6 Yg. The mass of the oceans is about 1.4 Yg. The total power output of the Sun is approximately 385 YW. The observable universe is estimated to be 880 Ym in diameter. One yottabyte (YB) is a unit of digital information or information storage capacity that contains one septillion bytes or 1,000 zettabytes. The yobibyte (YiB) is a related unit that uses a binary prefix, and means 1,024⁸ bytes, which is approximately 1.2 septillion bytes.
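As a quick consistency check on the 880 Ym figure above, the 93-billion-light-year diameter quoted elsewhere in this text converts to roughly that many yottametres. A minimal sketch (the metres-per-light-year constant is the standard value, everything else comes from the text):

metres_per_light_year = 9.461e15
diameter_ly = 93e9                       # diameter of the observable universe, light-years
diameter_m = diameter_ly * metres_per_light_year
diameter_Ym = diameter_m / 1e24          # yotta- denotes a factor of 10^24

print(f"{diameter_Ym:.0f} Ym")           # roughly 880 Ym, as quoted above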
All materials are made up from atoms, and all atoms consist of protons, neutrons and electrons. Protons have a positive electrical charge. Neutrons have no electrical charge (that is, they are neutral), while electrons have a negative electrical charge. Atoms are bound together by powerful forces of attraction existing between the atom's nucleus and the electrons in its outer shell. When these protons, neutrons and electrons are together within the atom they are happy and stable. But if we separate them from each other they want to reform and start to exert a potential of attraction called a potential difference. Now if we create a closed circuit these loose electrons will start to move and drift back to the protons due to their attraction, creating a flow of electrons. This flow of electrons is called an electrical current. The electrons do not flow freely through the circuit, as the material they move through creates a restriction to the electron flow. This restriction is called resistance.

Then all basic electrical or electronic circuits consist of three separate but very much related electrical quantities called: Voltage, ( v ), Current, ( i ) and Resistance, ( Ω ).

Voltage, ( V ) is the potential energy of an electrical supply stored in the form of an electrical charge. Voltage can be thought of as the force that pushes electrons through a conductor, and the greater the voltage the greater is its ability to "push" the electrons through a given circuit. As energy has the ability to do work, this potential energy can be described as the work required in joules to move electrons in the form of an electrical current around a circuit from one point or node to another. Then the difference in voltage between any two points, connections or junctions (called nodes) in a circuit is known as the Potential Difference, ( p.d. ), commonly called the Voltage Drop.

The potential difference between two points is measured in Volts with the circuit symbol V, or lowercase "v", although Energy, E, lowercase "e", is sometimes used to indicate a generated emf (electromotive force). Then the greater the voltage, the greater is the pressure (or pushing force) and the greater is the capacity to do work.

A constant voltage source is called a DC Voltage, while a voltage that varies periodically with time is called an AC voltage. Voltage is measured in volts, with one volt being defined as the electrical pressure required to force an electrical current of one ampere through a resistance of one Ohm. Voltages are generally expressed in Volts with prefixes used to denote sub-multiples of the voltage such as microvolts ( μV = 10⁻⁶ V ), millivolts ( mV = 10⁻³ V ) or kilovolts ( kV = 10³ V ). Voltage can be either positive or negative.

Batteries or power supplies are mostly used to produce a steady D.C. (direct current) voltage source such as 5V, 12V, 24V etc. in electronic circuits and systems, while A.C. (alternating current) voltage sources are available for domestic house and industrial power and lighting as well as power transmission. The mains voltage supply in the United Kingdom is currently 230 volts a.c. and 110 volts a.c. in the USA. General electronic circuits operate on low voltage DC battery supplies of between 1.5V and 24V dc. The circuit symbol for a constant voltage source is usually given as a battery symbol with a positive, + and negative, – sign indicating the direction of the polarity. The circuit symbol for an alternating voltage source is a circle with a sine wave inside.
A simple relationship can be made between a tank of water and a voltage supply. The higher the water tank above the outlet, the greater the pressure of the water as more energy is released; the higher the voltage, the greater the potential energy as more electrons are released. Voltage is always measured as the difference between any two points in a circuit, and the voltage between these two points is generally referred to as the "Voltage drop". Note that voltage can exist across a circuit without current, but current cannot exist without voltage, and as such any voltage source, whether DC or AC, likes an open or semi-open circuit condition but hates any short circuit condition, as this can destroy it.

Electrical Current, ( I ) is the movement or flow of electrical charge and is measured in Amperes (symbol I, or lowercase i, for intensity). It is the continuous and uniform flow (called a drift) of electrons (the negative particles of an atom) around a circuit that are being "pushed" by the voltage source. In reality, electrons flow from the negative (–ve) terminal to the positive (+ve) terminal of the supply, and for ease of circuit understanding conventional current flow assumes that the current flows from the positive to the negative terminal. Generally in circuit diagrams the flow of current through the circuit usually has an arrow associated with the symbol, I, or lowercase i, to indicate the actual direction of the current flow. However, this arrow usually indicates the direction of conventional current flow and not necessarily the direction of the actual flow.

Conventional Current Flow
Conventionally this is the flow of positive charge around a circuit, being positive to negative. The diagram at the left shows the movement of the positive charge (holes) around a closed circuit flowing from the positive terminal of the battery, through the circuit and returning to the negative terminal of the battery. This flow of current from positive to negative is generally known as conventional current flow. This was the convention chosen during the discovery of electricity, in which direction electric current was thought to flow in a circuit. To continue with this line of thought, in all circuit diagrams and schematics, the arrows shown on symbols for components such as diodes and transistors point in the direction of conventional current flow.

Then Conventional Current Flow gives the flow of electrical current from positive to negative, which is the opposite in direction to the actual flow of electrons. The flow of electrons around the circuit is opposite to the direction of the conventional current flow, being negative to positive. The actual current flowing in an electrical circuit is composed of electrons that flow from the negative pole of the battery (the cathode) and return back to the positive pole (the anode) of the battery. This is because the charge on an electron is negative by definition and so is attracted to the positive terminal. This flow of electrons is called Electron Current Flow. Therefore, electrons actually flow around a circuit from the negative terminal to the positive.

Both conventional current flow and electron flow are used by many textbooks. In fact, it makes no difference which way the current is flowing around the circuit as long as the direction is used consistently. The direction of current flow does not affect what the current does within the circuit. Generally it is much easier to understand the conventional current flow – positive to negative.
In electronic circuits, a current source is a circuit element that provides a specified amount of current, for example 1A, 5A, 10 Amps etc., with the circuit symbol for a constant current source given as a circle with an arrow inside indicating its direction. Current is measured in Amps, and an amp or ampere is defined as the amount of charge (Q in Coulombs) passing a certain point in the circuit in one second (t in Seconds); that is, I = Q/t. Electrical current is generally expressed in Amps with prefixes used to denote micro amps ( μA = 10⁻⁶ A ) or milliamps ( mA = 10⁻³ A ). Note that electrical current can be either positive in value or negative in value depending upon its direction of flow around the circuit.

Current that flows in a single direction is called Direct Current, or D.C., and current that alternates back and forth through the circuit is known as Alternating Current, or A.C. Whether AC or DC, current only flows through a circuit when a voltage source is connected to it, with its "flow" being limited by both the resistance of the circuit and the voltage source pushing it. Also, as alternating currents (and voltages) are periodic and vary with time, the "effective" or "RMS" (Root Mean Squared) value, given as Irms, produces the same average power loss equivalent to a DC current Iaverage. Current sources are the opposite of voltage sources in that they like short or closed circuit conditions but hate open circuit conditions as no current will flow.

Using the tank of water relationship, current is the equivalent of the flow of water through the pipe, with the flow being the same throughout the pipe. The faster the flow of water, the greater the current. Note that current cannot exist without voltage, so any current source, whether DC or AC, likes a short or semi-short circuit condition but hates any open circuit condition, as this prevents it from flowing.

Resistance, ( R ) is the capacity of a material to resist or prevent the flow of current or, more specifically, the flow of electric charge within a circuit. The circuit element which does this perfectly is called the "Resistor". Resistance is a circuit element measured in Ohms, Greek symbol ( Ω, Omega ), with prefixes used to denote kilo-ohms ( kΩ = 10³ Ω ) and mega-ohms ( MΩ = 10⁶ Ω ). Note that resistance cannot be negative in value, only positive.

The amount of resistance a resistor has is determined by the relationship of the current through it to the voltage across it, which determines whether the circuit element is a "good conductor" – low resistance, or a "bad conductor" – high resistance. Low resistance, for example 1Ω or less, implies that the circuit is a good conductor made from materials such as copper, aluminium or carbon, while a high resistance, 1MΩ or more, implies the circuit is a bad conductor made from insulating materials such as glass, porcelain or plastic. A "semiconductor", on the other hand, such as silicon or germanium, is a material whose resistance is half way between that of a good conductor and a good insulator. Hence the name "semi-conductor". Semiconductors are used to make Diodes and Transistors etc.

Resistance can be linear or non-linear in nature, but never negative. Linear resistance obeys Ohm's Law, as the voltage across the resistor is linearly proportional to the current through it. Non-linear resistance does not obey Ohm's Law but has a voltage drop across it that is proportional to some power of the current.
Resistance is pure and is not affected by frequency, with the AC impedance of a resistance being equal to its DC resistance, and as a result it cannot be negative. Remember that resistance is always positive, and never negative. A resistor is classed as a passive circuit element and as such cannot deliver power or store energy. Instead, resistors absorb power that appears as heat and light. Power in a resistance is always positive regardless of voltage polarity and current direction.

For very low values of resistance, for example milli-ohms ( mΩ ), it is sometimes much easier to use the reciprocal of resistance ( 1/R ) rather than resistance ( R ) itself. The reciprocal of resistance is called Conductance, symbol ( G ), and represents the ability of a conductor or device to conduct electricity; in other words, the ease by which current flows. High values of conductance imply a good conductor such as copper, while low values of conductance imply a bad conductor such as wood. The standard unit of measurement given for conductance is the Siemen, symbol (S). Another unit used for conductance is the mho (ohm spelt backwards), which is symbolized by an inverted Ohm sign ℧. Power can also be expressed using conductance as: P = I²/G = V²G.

The relationship between Voltage, ( v ) and Current, ( i ) in a circuit of constant Resistance, ( R ) would produce a straight-line i-v relationship with slope equal to the value of the resistance, as shown.

Voltage, Current and Resistance Summary
Hopefully by now you should have some idea of how electrical Voltage, Current and Resistance are closely related together. The relationship between Voltage, Current and Resistance forms the basis of Ohm's law. In a linear circuit of fixed resistance, if we increase the voltage, the current goes up, and similarly, if we decrease the voltage, the current goes down. This means that if the voltage is high the current is high, and if the voltage is low the current is low. Likewise, if we increase the resistance, the current goes down for a given voltage, and if we decrease the resistance the current goes up. This means that if resistance is high current is low, and if resistance is low current is high.

Then we can see that current flow around a circuit is directly proportional ( ∝ ) to voltage ( V↑ causes I↑ ) but inversely proportional ( 1/∝ ) to resistance ( R↑ causes I↓ ). A basic summary of the three units is given below.
- Voltage or potential difference is the measure of potential energy between two points in a circuit and is commonly referred to as its "volt drop".
- When a voltage source is connected to a closed loop circuit the voltage will produce a current flowing around the circuit.
- In DC voltage sources the symbols +ve (positive) and −ve (negative) are used to denote the polarity of the voltage supply.
- Voltage is measured in Volts and has the symbol V for voltage or E for electrical energy.
- Current flow is a combination of electron flow and hole flow through a circuit.
- Current is the continuous and uniform flow of charge around the circuit and is measured in Amperes or Amps and has the symbol I.
- Current is Directly Proportional to Voltage ( I ∝ V )
- The effective (rms) value of an alternating current has the same average power loss equivalent to a direct current flowing through a resistive element.
- Resistance is the opposition to current flowing around a circuit.
- Low values of resistance imply a conductor and high values of resistance imply an insulator.
- Current is Inversely Proportional to Resistance ( I 1/∝ R )
- Resistance is measured in Ohms and has the Greek symbol Ω or the letter R.

|Quantity|Symbol|Unit of Measure|Abbreviation|
|Voltage|V or E|Volt|V|
|Current|I|Ampere (Amp)|A|
|Resistance|R|Ohm|Ω|

In the next tutorial about DC Circuits we will look at Ohm's Law, which is a mathematical equation explaining the relationship between Voltage, Current, and Resistance within electrical circuits and is the foundation of electronics and electrical engineering. Ohm's Law is defined as: V = I*R.
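As a small illustration of the relationships summarised above, and of the Ohm's Law equation just quoted, the sketch below computes current, power and conductance for a simple resistive circuit. The supply voltage and resistance are arbitrary example values, not figures taken from the tutorial.

# Ohm's Law and power in a simple resistive circuit.
# V = I * R, so I = V / R, and power P = V * I = I^2 / G = V^2 * G (with G = 1/R).

supply_voltage = 12.0        # volts (example value)
resistance = 470.0           # ohms (example value)

current = supply_voltage / resistance            # amperes
power = supply_voltage * current                 # watts
conductance = 1.0 / resistance                   # siemens, G = 1/R
power_from_G = supply_voltage**2 * conductance   # same result via P = V^2 * G

print(f"Current      I = {current * 1000:.1f} mA")
print(f"Power        P = {power * 1000:.1f} mW")
print(f"Check (V^2*G)  = {power_from_G * 1000:.1f} mW")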
Python Random Module: Generate Random Numbers and Data
This lesson demonstrates how to generate random data in Python using the random module. In Python, the random module implements pseudo-random number generators for various distributions, including integer and float (real).

Random Data Series
This Python random data generation series contains the following in-depth tutorials. You can directly read those.
- Python random integer number: Generate random numbers using randint() and randrange().
- Python random choice: Select a random item from any sequence such as list, tuple, set.
- Python random sample: Select multiple random items (k sized random samples) from a list or set.
- Python weighted random choices: Select multiple random items with probability (weights) from a list or set.
- Python random seed: Initialize the pseudorandom number generator with a seed value.
- Python random shuffle: Shuffle or randomize any sequence in-place.
- Python random float number using uniform(): Generate a random float number within a range.
- Generate random strings and passwords in Python: Generate a random string of letters. Also, create a random password with a combination of letters, digits, and symbols.
- Cryptographically secure random generator in Python: Generate a cryptographically secure random number using synchronization methods to ensure that no two processes can obtain the same data simultaneously.
- Python Secrets module: Use the secrets module to secure random data in Python 3.6 and above.
- Python UUID Module: Generate random Universally unique IDs.
- Python Random data generation Quiz
- Python Random data generation Exercise

How to Use the random module
You need to import the random module in your program, and then you are ready to use this module. Use the following statement to import the random module in your code.

import random

print("Printing random number using random.random()")
print(random.random())
# Output 0.5015127958234789

As you can see in the result, we have got 0.50. You may get a different number.
- random.random() is the most basic function of the random module.
- Almost all functions of the random module depend on the basic function random().
- random() returns the next random floating-point number in the range [0.0, 1.0).

Random module functions
Now let us see the different functions available in the random module and their usage. Click on each function to study it in detail. (The original table lost its function-name column; the standard mapping is restored below.)
- randint(a, b): Generate a random integer number within a range.
- randrange(start, stop, step): Returns a random integer number within a range by specifying the step increment.
- choice(sequence): Select a random item from a list or any other sequence.
- seed(a): Initialize the pseudorandom number generator with a seed value.
- shuffle(x): Shuffle or randomize the sequence in place.
- uniform(a, b): Returns a random floating-point number within a range.
- random(): Generate a random floating-point number in the range [0.0, 1.0).
- betavariate(alpha, beta): Returns a random floating-point number with the beta distribution, in such a way that the returned values lie between 0 and 1.
- expovariate(lambd): Returns random floating-point numbers, exponentially distributed. If lambd is positive, returned values range from 0 to positive infinity.
- gammavariate(alpha, beta): Returns a random floating-point number N with a gamma distribution, such that the parameters satisfy alpha > 0 and beta > 0.

import random

# random number from 0 to 1
print(random.random())
# Output 0.16123124494385477

# random number from 10 to 20
print(random.randint(10, 20))
# Output 18

# random number from 10 to 20 with step 2
print(random.randrange(10, 20, 2))
# Output 14

# random float number within a range
print(random.uniform(5.5, 25.5))
# Output 5.86390810771935

# random choice from a sequence
print(random.choice([10, 20, 30, 40, 50]))
# Output 30

# random sample (without replacement) from a sequence
print(random.sample([10, 20, 30, 40, 50], k=3))
# Output [50, 10, 20]

# random choices (with replacement) from a sequence
print(random.choices([10, 20, 30, 40, 50], k=3))
# Output [30, 10, 40]

# random shuffle
x = [10, 20, 30, 40, 50, 60]
random.shuffle(x)
print(x)
# [60, 10, 30, 20, 50, 40]

# random seed
random.seed(2)
print(random.randint(10, 20))  # 10
random.seed(2)
print(random.randint(10, 20))  # 10

random.triangular(low, high, mode)
The random.triangular() function returns a random floating-point number N such that low <= N <= high, with the specified mode between those bounds. The default value of the lower bound is zero, the default upper bound is one, and the mode argument defaults to the midpoint between the bounds, giving a symmetric distribution. Use the random.triangular() function to generate random numbers from a triangular probability distribution, for example to use these numbers in a simulation.

import random

print("floating point triangular")
print(random.triangular(10.5, 25.5, 15.5))
# Output 16.114862085401924

Generate random String
Refer to Generate random strings and passwords in Python. This guide includes the following things:
- Generate a random string of any length.
- Generate a random password, which contains letters, digits, and special symbols.

Cryptographically secure random generator in Python
Random numbers and data generated by the random module are not cryptographically secure. A cryptographically secure random generator generates random data using synchronization methods to ensure that no two processes can obtain the same data simultaneously. A secure random generator is useful for security-sensitive applications such as OTP generation. We can use the following approaches to secure the random generator in Python:
- The secrets module to secure random data in Python 3.6 and above.
- The random.SystemRandom class in Python 2.

Get and Set the state of the random Generator
The random module has two functions, random.getstate() and random.setstate(), to capture and restore the random generator's internal state. Using these functions, we can generate the same random numbers or sequence of data.

The getstate() function returns a tuple object by capturing the current internal state of the random generator. We can pass this state to the setstate() method to restore this state as the current state. The setstate() function restores the random generator's internal state to the state object passed to it.

Note: By changing the state to the previous state, we can get the same random data. For example, if you want to get the same sample items again, you can use these functions. If you get a previous state and restore it, you can reproduce the same random data repeatedly. Let's see an example of how to get and set the state of a random generator in Python.
import random

number_list = [3, 6, 9, 12, 15, 18, 21, 24, 27, 30]
print("First Sample is ", random.sample(number_list, k=5))
# Output [3, 27, 21, 24, 18]

# Get the current state and store it
state = random.getstate()

# Set the current state
random.setstate(state)
# Now it will print the same second sample list
print("Second Sample is ", random.sample(number_list, k=5))
# Output [3, 27, 21, 24, 18]

random.setstate(state)
# Again it will print the same sample list
print("Third Sample is ", random.sample(number_list, k=5))
# Output [3, 27, 21, 24, 18]

# Without setstate it gives a new sample
print("Fourth Sample is ", random.sample(number_list, k=5))
# Output [27, 24, 12, 21, 15]

As you can see in the output, we are getting the same sample list because we use the same state again and again.

NumPy random package for multidimensional arrays
PRNG is an acronym for pseudorandom number generator. As you know, using the Python random module we can generate scalar random numbers and data. Use the NumPy module to generate a multidimensional array of random numbers. NumPy's numpy.random package has multiple functions to generate random n-dimensional arrays for various distributions.

Create an n-dimensional array of random float numbers
- Use the numpy.random.rand(d0, d1, …, dn) function to generate an n-dimensional array of random float numbers in the range [0.0, 1.0).
- Use the numpy.random.uniform(low=0.0, high=1.0, size=None) function to generate an n-dimensional array of random float numbers in the range [low, high).

import numpy as np

random_array = np.random.rand(2, 2)
print("2x2 array for random numbers", random_array, "\n")

random_float_array = np.random.uniform(25.5, 99.5, size=(3, 2))
print("3 X 2 array of random float numbers in range [25.5, 99.5]", random_float_array)

Output:
2x2 array for random numbers [[0.47248707 0.44770557]
 [0.33280813 0.64284777]]
3 X 2 array of random float numbers in range [25.5, 99.5] [[52.27782303 49.67787027]
 [28.33494049 37.99789879]
 [27.19170587 76.69219575]]

Generate an n-dimensional array of random integers
Use the numpy.random.random_integers(low, high=None, size=None) function to generate a random n-dimensional array of integers.

import numpy as np

random_integer_array = np.random.random_integers(5, size=(3, 2))
print("2-dimensional random integer array", random_integer_array)

Output:
2-dimensional random integer array [[2 3]
 [3 4]
 [3 2]]

Generate random Universally unique IDs
The Python UUID Module provides immutable UUID objects. UUID is a Universally Unique Identifier. It has functions to generate all versions of UUID. Using the uuid4() function of the UUID module, you can generate a 128-bit long random unique ID, and it's cryptographically safe. These unique IDs are used to identify documents, users, resources, or information in computer systems.

import uuid

# get a random UUID
safeId = uuid.uuid4()
print("safe unique id is ", safeId)

Output:
safe unique id is 78mo4506-8btg-345b-52kn-8c7fraga847da

Dice Game Using a Random module
I have created a simple dice game to understand random module functions. In this game, we have two players and two dice.
- One by one, each player shuffles both the dice and plays.
- The algorithm calculates the sum of the two dice numbers and adds it to each player's scoreboard.
- The player who scores the higher number is the winner.
import random

PlayerOne = "Eric"
PlayerTwo = "Kelly"
EricScore = 0
KellyScore = 0

# each dice contains six numbers
diceOne = [1, 2, 3, 4, 5, 6]
diceTwo = [1, 2, 3, 4, 5, 6]

def shuffle_dice():
    # Both Eric and Kelly will roll both the dice using the shuffle method
    for i in range(5):
        # shuffle both the dice 5 times
        random.shuffle(diceOne)
        random.shuffle(diceTwo)
    # use the choice method to pick one number randomly
    firstNumber = random.choice(diceOne)
    SecondNumber = random.choice(diceTwo)
    return firstNumber + SecondNumber

print("Dice game using a random module\n")

# Let's play the dice game three times
for i in range(3):
    # let's do a toss to determine who has the right to play first
    # generate a random number from 1 to 100, including 100
    EricTossNumber = random.randint(1, 100)
    # generate a random number from 1 to 100, not including 101
    KellyTossNumber = random.randrange(1, 101, 1)

    if EricTossNumber > KellyTossNumber:
        print("Eric won the toss")
        EricScore = shuffle_dice()
        KellyScore = shuffle_dice()
    else:
        print("Kelly won the toss")
        KellyScore = shuffle_dice()
        EricScore = shuffle_dice()

    if EricScore > KellyScore:
        print("Eric is winner of dice game. Eric's Score is:", EricScore, "Kelly's score is:", KellyScore, "\n")
    else:
        print("Kelly is winner of dice game. Kelly's Score is:", KellyScore, "Eric's score is:", EricScore, "\n")

Output:
Dice game using a random module

Kelly won the toss
Eric is the winner of a dice game. Eric's Score is: 9 Kelly's score is: 6

Kelly won the toss
Eric is the winner of a dice game. Eric's Score is: 11 Kelly's score is: 9

Eric won the toss
Kelly is the winner of a dice game. Kelly's Score is: 12 Eric's score is: 5

Exercise and Quiz
To practice what you learned in this tutorial, I have created a Quiz and an Exercise project.
- Solve the Python Random data generation Quiz to test your random data generation concepts.
- Solve the Python Random data generation Exercise to practice and master random data generation techniques.
In the theory of relativity, time dilation is a difference of elapsed time between two events as measured by observers either moving relative to each other or differently situated from a gravitational mass or masses. A clock at rest with respect to one observer may be measured to tick at a different rate when compared to a second observer's clock. This effect arises neither from technical aspects of the clocks nor from the propagation time of signals, but from the nature of spacetime.

Clocks on the Space Shuttle run slightly slower than reference clocks on Earth, while clocks on GPS and Galileo satellites run slightly faster. Such time dilation has been repeatedly demonstrated (see experimental confirmation below), for instance by small disparities in atomic clocks on Earth and in space, even though both clocks work perfectly (it is not a mechanical malfunction). The nature of spacetime is such that time measured along different trajectories is affected by differences in either gravity or velocity – each of which affects time in different ways.

In theory, and to make a clearer example, time dilation could affect planned meetings for astronauts with advanced technologies and greater travel speeds. The astronauts would have to set their clocks to count exactly 80 years, whereas mission control – back on Earth – might need to count 81 years. The astronauts would return to Earth, after their mission, having aged one year less than the people staying on Earth. What is more, the local experience of time passing never actually changes for anyone. In other words, the astronauts on the ship as well as the mission control crew on Earth each feel normal, despite the effects of time dilation (i.e. to the traveling party, those stationary are living "faster"; while to those who stood still, their counterparts in motion live "slower" at any given moment).

With technology limiting the velocities of astronauts, these differences are minuscule: after 6 months on the International Space Station (ISS), the astronaut crew has indeed aged less than those on Earth, but only by about 0.005 seconds (nowhere near the 1 year disparity from the theoretical example). The effects would be greater if the astronauts were traveling nearer to the speed of light (299,792,458 m/s), instead of their actual speed – which is the speed of the orbiting ISS, about 7,700 m/s. Time dilation is caused by differences in either gravity or relative velocity. In the case of the ISS, time is slower due to the velocity of its circular orbit; this effect is slightly reduced by the opposing effect of its higher gravitational potential at altitude.

When two observers are in relative uniform motion and uninfluenced by any gravitational mass, the point of view of each will be that the other's (moving) clock is ticking at a slower rate than the local clock. The faster the relative velocity, the greater the magnitude of time dilation. This case is sometimes called special relativistic time dilation. For instance, two rocket ships (A and B) speeding past one another in space would experience time dilation. If they somehow had a clear view into each other's ships, each crew would see the others' clocks and movement as going more slowly.
That is, inside the frame of reference of Ship A, everything is moving normally, but everything over on Ship B appears to be moving more slowly (and vice versa). From a local perspective, time registered by clocks that are at rest with respect to the local frame of reference (and far from any gravitational mass) always appears to pass at the same rate. In other words, if a new ship, Ship C, travels alongside Ship A, it is "at rest" relative to Ship A. From the point of view of Ship A, new Ship C's time would appear normal too.

A question arises: If Ship A and Ship B both think each other's time is moving slower, who will have aged more if they decided to meet up? With a more sophisticated understanding of relative velocity time dilation, this seeming twin paradox turns out not to be a paradox at all (the resolution of the paradox involves a jump in time, as a result of the accelerated observer turning around). Similarly, understanding the twin paradox would help explain why astronauts on the ISS age slower (e.g. 0.007 seconds behind for every six months) even though they are experiencing relative velocity time dilation.

The key is that both observers are differently situated in their distance from a significant gravitational mass. The general theory of relativity describes how, for both observers, the clock that is closer to the gravitational mass, i.e. deeper in its "gravity well", appears to go more slowly than the clock that is more distant from the mass. In the case of a satellite orbiting a planet, this works in the opposite direction to the relative velocity time dilation. Gravitational time dilation is at play, for example, for ISS astronauts. With respect to ground observers, the ISS astronauts' relative velocity slows down their time, whereas the reduced gravitational influence at their location speeds it up. The two opposing effects are not equally strong. At the ISS altitude the net effect is a slowing down of clocks, whereas in much higher orbits clocks run faster than on the ground.

This effect is not restricted to astronauts in space; a climber's time is passing slightly faster at the top of a mountain (a high altitude, farther from the Earth's center of gravity) compared to people at sea level. It has also been calculated that, due to time dilation, the core of the Earth is 2.5 years younger than the crust. As with all time dilation, the local experience of time is normal (nobody notices a difference within their own frame of reference). In the situations of velocity time dilation, both observers saw the other as moving slower (a reciprocal effect). Now, with gravitational time dilation, both observers – those at sea level, versus the climber – agree that the clock nearer the mass is slower in rate, and they agree on the ratio of the difference (time dilation from gravity is therefore not reciprocal). That is, the climber sees the sea level clocks as moving more slowly, and those living at sea level see the climber's clock as moving faster.

- In special relativity (or, hypothetically far from all gravitational mass), clocks that are moving with respect to an inertial system of observation are measured to be running more slowly. This effect is described precisely by the Lorentz transformation.
- In general relativity, clocks at a position with lower gravitational potential – such as in closer proximity to a planet – are found to be running more slowly. The articles on gravitational time dilation and gravitational redshift give a more detailed discussion.
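To put a rough number on the mountain-climber example above: in the weak-field approximation, the fractional rate difference between two clocks separated by a height h near the Earth's surface is about gh/c². The sketch below uses a 3,000 m altitude difference as an arbitrary example value; it is an approximation, not the full general-relativistic treatment.

g = 9.81          # m/s^2, surface gravity
c = 2.998e8       # m/s, speed of light
h = 3000.0        # metres of altitude difference (example value)
seconds_per_year = 365.25 * 24 * 3600

fractional_rate = g * h / c**2                 # weak-field gravitational time dilation
gain_per_year = fractional_rate * seconds_per_year

print(f"fractional rate difference ~ {fractional_rate:.2e}")
print(f"climber's clock gains ~ {gain_per_year * 1e6:.1f} microseconds per year")

For these inputs the climber's clock runs ahead by roughly ten microseconds per year, which is why the effect goes unnoticed in everyday life even though it is real and measurable.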
Special and general relativistic effects can combine (as seen with ISS astronauts). In special relativity, the time dilation effect is reciprocal: as observed from the point of view of either of two clocks which are in motion with respect to each other, it will be the other clock that is time dilated. (This presumes that the relative motion of both parties is uniform; that is, they do not accelerate with respect to one another during the course of the observations.) In contrast, gravitational time dilation (as treated in general relativity) is not reciprocal: an observer at the top of a tower will observe that clocks at ground level tick slower, and observers on the ground will agree about the direction and the magnitude of the difference. There is still some disagreement in a sense, because all the observers believe their own local clocks are correct, but the direction and ratio of gravitational time dilation is agreed by all observers, independent of their altitude. Science fiction enthusiasts have noted the implications time dilation has for forward time travel, which it technically makes possible. The Hafele and Keating experiment involved flying planes around the world with atomic clocks on board. Upon the trips' completion the clocks were compared to a static, ground-based atomic clock. It was found that the westbound clocks had gained 273±7 nanoseconds. The current human time travel record holder is Russian cosmonaut Sergei Krikalev, who beat the previous record of about 20 milliseconds held by cosmonaut Sergei Avdeyev. The constancy of the speed of light for all inertial observers means, counter to intuition, that speeds of material objects and light are not additive. It is not possible to make the speed of light appear greater by approaching at speed towards the material source that is emitting light. It is not possible to make the speed of light appear less by receding from the source at speed. From one point of view, it is the implications of this unexpected constancy that take away from constancies expected elsewhere. Consider a simple clock consisting of two mirrors A and B, between which a light pulse is bouncing. The separation of the mirrors is L and the clock ticks once each time the light pulse hits a given mirror. In the frame where the clock is at rest, the light pulse traces out a path of length 2L and the period of the clock is 2L divided by the speed of light: Δt = 2L/c. From the frame of reference of a moving observer traveling at the speed v relative to the rest frame of the clock, the light pulse traces out a longer, angled path. The second postulate of special relativity states that the speed of light in free space is constant for all inertial observers, which implies a lengthening of the period of this clock from the moving observer's perspective. That is to say, in a frame moving relative to the clock, the clock appears to be running more slowly. Straightforward application of the Pythagorean theorem leads to the well-known prediction of special relativity. The total time for the light pulse to trace its path is given by Δt′ = 2D/c. The length of the half path D can be calculated as a function of known quantities as D = √(L² + (vΔt′/2)²). Substituting D from this equation into the previous one and solving for Δt′ gives Δt′ = (2L/c)/√(1 − v²/c²), and thus, with the definition of Δt, Δt′ = Δt/√(1 − v²/c²) = γΔt, which expresses the fact that for the moving observer the period of the clock is longer than in the frame of the clock itself.
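The light-clock algebra above can be checked numerically. The sketch below picks an arbitrary mirror separation and a clock speed of 0.6c (both values are assumptions made only for illustration) and confirms that the Pythagorean construction reproduces the dilated period γΔt.

```python
import math

c = 299_792_458.0   # speed of light (m/s)
L = 1.0             # mirror separation (m), arbitrary
v = 0.6 * c         # clock speed relative to the observer, arbitrary

dt = 2 * L / c                                   # period in the clock's rest frame
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
dt_prime = gamma * dt                            # predicted dilated period

# Geometric check: over half a period the pulse travels the hypotenuse
# D = sqrt(L^2 + (v*dt'/2)^2); at speed c that trip should take exactly dt'/2.
D = math.sqrt(L ** 2 + (v * dt_prime / 2) ** 2)
assert math.isclose(2 * D / c, dt_prime)

print(f"dt = {dt:.3e} s, dt' = {dt_prime:.3e} s, gamma = {gamma:.3f}")
```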
Common sense would dictate that if time passage has slowed for a moving object, the moving object would observe the external world to be correspondingly "sped up". Counterintuitively, special relativity predicts the opposite. A similar oddity occurs in everyday life. If Sam sees Abigail at a distance she appears small to him and at the same time Sam appears small to Abigail. Being very familiar with the effects of perspective, we see no mystery or a hint of a paradox in this situation. One is accustomed to the notion of relativity with respect to distance: the distance from Los Angeles to New York is by convention the same as the distance from New York to Los Angeles. On the other hand, when speeds are considered, one thinks of an object as "actually" moving, overlooking that its motion is always relative to something else – to the stars, the ground or to oneself. If one object is moving with respect to another, the latter is moving with respect to the former and with equal relative speed. In the special theory of relativity, a moving clock is found to be ticking slowly with respect to the observer's clock. If Sam and Abigail are on different trains in near-lightspeed relative motion, Sam measures (by all methods of measurement) clocks on Abigail's train to be running slowly and similarly, Abigail measures clocks on Sam's train to be running slowly. Note that in all such attempts to establish "synchronization" within the reference system, the question of whether something happening at one location is in fact happening simultaneously with something happening elsewhere, is of key importance. Calculations are ultimately based on determining which events are simultaneous. Furthermore, establishing simultaneity of events separated in space necessarily requires transmission of information between locations, which by itself is an indication that the speed of light will enter the determination of simultaneity. It is a natural and legitimate question to ask how, in detail, special relativity can be self-consistent if clock C is time-dilated with respect to clock B and clock B is also time-dilated with respect to clock C. It is by challenging the assumptions built into the common notion of simultaneity that logical consistency can be restored. Simultaneity is a relationship between an observer in a particular frame of reference and a set of events. By analogy, left and right are accepted to vary with the position of the observer, because they apply to a relationship. In a similar vein, Plato explained that up and down describe a relationship to the earth and one would not fall off at the antipodes. In relativity, temporal coordinate systems are set up using a procedure for synchronizing clocks. It is now usually called the Poincaré-Einstein synchronization procedure. An observer with a clock sends a light signal out at time t1 according to his clock. At a distant event, that light signal is reflected back, and arrives back at the observer at time t2 according to his clock. Since the light travels the same path at the same rate going both out and back for the observer in this scenario, the coordinate time of the event of the light signal being reflected for the observer tE is tE = (t1 + t2) / 2. In this way, a single observer's clock can be used to define temporal coordinates which are good anywhere in the universe. However, since those clocks are in motion in all other inertial frames, these clock indications are thus not synchronous in those frames, which is the basis of relativity of simultaneity. 
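The synchronization convention just described is simple to state in code. The helper below merely encodes tE = (t1 + t2)/2; the numerical values are arbitrary.

```python
def reflection_coordinate_time(t1: float, t2: float) -> float:
    """Coordinate time assigned to the distant reflection event, given the
    emission time t1 and the return time t2 read on the observer's own clock."""
    return (t1 + t2) / 2.0

# A signal sent at t1 = 0 s and received back at t2 = 2 s is assigned the
# coordinate time 1 s: the light is assumed to spend equal times on the
# outbound and return legs.
print(reflection_coordinate_time(0.0, 2.0))  # 1.0
```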
Because the pairs of putatively simultaneous moments are identified differently by different observers, each can treat the other clock as being the slow one without relativity being self-contradictory. Symmetric time dilation occurs with respect to coordinate systems set up in this manner. It is an effect where another clock is measured to run more slowly than one's own clock. Observers do not consider their own clock time to be affected, but may find that it is observed to be affected in another coordinate system. This symmetry can be demonstrated in a Minkowski diagram. Clock C resting in inertial frame S′ meets clock A at d and clock B at f (both resting in S). All three clocks simultaneously start to tick in S. The worldline of A is the ct-axis, the worldline of B intersecting f is parallel to the ct-axis, and the worldline of C is the ct′-axis. All events simultaneous with d in S are on the x-axis, in S′ on the x′-axis. The proper time between two events is indicated by a clock present at both events. It is invariant, i.e., in all inertial frames it is agreed that this time is indicated by that clock. Interval df is therefore the proper time of clock C, and is shorter with respect to the coordinate times ef = dg of clocks B and A in S. Conversely, the proper time ef of B is also shorter with respect to the time if in S′, because the event e was measured in S′ already at time i due to relativity of simultaneity, long before C started to tick. From that it can be seen that the proper time between two events indicated by an unaccelerated clock present at both events, compared with the synchronized coordinate time measured in all other inertial frames, is always the minimal time interval between those events. However, the interval between two events can also correspond to the proper time of accelerated clocks present at both events. Among all possible proper times between two events, the proper time of the unaccelerated clock is maximal, which is the solution to the twin paradox. The formula for determining time dilation in special relativity is Δt′ = Δt/√(1 − v²/c²) = γΔt, where Δt is the time interval between two co-local events (i.e. happening at the same place) for an observer in some inertial frame (e.g. ticks on his clock), known as the proper time, Δt′ is the time interval between those same events, as measured by another observer, inertially moving with velocity v with respect to the former observer, v is the relative velocity between the observer and the moving clock, c is the speed of light, and the Lorentz factor (conventionally denoted by the Greek letter gamma or γ) is γ = 1/√(1 − v²/c²). Thus the duration of the clock cycle of a moving clock is found to be increased: it is measured to be "running slow". The range of such variances in ordinary life, where v ≪ c, even considering space travel, is not great enough to produce easily detectable time dilation effects, and such vanishingly small effects can be safely ignored for most purposes. It is only when an object approaches speeds on the order of 30,000 km/s (1/10 the speed of light) that time dilation becomes important. Time dilation by the Lorentz factor was predicted by Joseph Larmor (1897), at least for electrons orbiting a nucleus. Thus "... individual electrons describe corresponding parts of their orbits in times shorter for the [rest] system in the ratio √(1 − v²/c²)" (Larmor 1897). Time dilation of magnitude corresponding to this (Lorentz) factor has been experimentally confirmed, as described below.
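To get a feel for when the Lorentz factor departs measurably from 1, the short sketch below tabulates γ for a few illustrative speeds; note how small the effect is until v reaches a sizable fraction of c, consistent with the 1/10-of-c rule of thumb given above.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def lorentz_factor(v: float, c: float = C) -> float:
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# fraction of c -> gamma (factor by which a moving clock's cycle is stretched)
for fraction in (1e-6, 0.01, 0.1, 0.5, 0.9, 0.99):
    gamma = lorentz_factor(fraction * C)
    print(f"v = {fraction:>6} c   gamma = {gamma:.9f}")
```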
High accuracy timekeeping, low earth orbit satellite tracking, and pulsar timing are applications that require the consideration of the combined effects of mass and motion in producing time dilation. Practical examples include the International Atomic Time standard and its relationship with the Barycentric Coordinate Time standard used for interplanetary objects. Relativistic time dilation effects for the solar system and the earth can be modeled very precisely by the Schwarzschild solution to the Einstein field equations. In the Schwarzschild metric, the interval dtE is given by dtE² = (1 − 2∑(GMi/ri)/c²) dtc² − (dx² + dy² + dz²)/c², where: - dtE is a small increment of proper time tE (an interval that could be recorded on an atomic clock); - dtc is a small increment in the coordinate tc (coordinate time); - dx, dy and dz are small increments in the three coordinates x, y, z of the clock's position; and - ∑(GMi/ri) represents the sum of the Newtonian gravitational potentials due to the masses in the neighborhood, based on their distances ri from the clock. This sum ∑(GMi/ri) includes any tidal potentials, and is represented as U (using the positive astronomical sign convention for gravitational potentials). The coordinate velocity of the clock is given by v² = (dx² + dy² + dz²)/dtc². The coordinate time tc is the time that would be read on a hypothetical "coordinate clock" situated infinitely far from all gravitational masses (U = 0), and stationary in the system of coordinates (v = 0). The exact relation between the rate of proper time and the rate of coordinate time for a clock with a radial component of velocity is dtE/dtc = √(1 − 2U/c² − v²/c² − (2U/c²)(v∥²/c²)/(1 − 2U/c²)), where: - v∥ is the radial velocity, and - U = ∑(GMi/ri) is the Newtonian potential, equivalent to half of the escape velocity squared. The above equation is exact under the assumptions of the Schwarzschild solution. Time dilation has been tested a number of times. The routine work carried on in particle accelerators since the 1950s, such as those at CERN, is a continuously running test of the time dilation of special relativity. The specific experiments include: - Ives and Stilwell (1938, 1941). The stated purpose of these experiments was to verify the time dilation effect, predicted by Larmor–Lorentz ether theory, due to motion through the ether, using Einstein's suggestion that the Doppler effect in canal rays would provide a suitable experiment. These experiments measured the Doppler shift of the radiation emitted from cathode rays, when viewed from directly in front and from directly behind. The high and low frequencies detected were not the classically predicted values f0/(1 − v/c) and f0/(1 + v/c). The high and low frequencies of the radiation from the moving sources were measured as f0√((1 + v/c)/(1 − v/c)) and f0√((1 − v/c)/(1 + v/c)), as deduced by Einstein (1905) from the Lorentz transformation, when the source is running slow by the Lorentz factor. - Rossi and Hall (1941) compared the population of cosmic-ray-produced muons at the top of a mountain to that observed at sea level. Although the travel time for the muons from the top of the mountain to the base is several muon half-lives, the muon sample at the base was only moderately reduced. This is explained by the time dilation attributed to their high speed relative to the experimenters. That is to say, the muons were decaying about 10 times slower than if they were at rest with respect to the experimenters. - Hasselkamp, Mondry, and Scharmann (1979) measured the Doppler shift from a source moving at right angles to the line of sight. The most general relationship between the frequency of radiation from a moving source and its rest frequency is fdetected = frest/(γ(1 − (v/c)cos ϕ)), where ϕ is the angle between the source's velocity and the line from the source to the detector, as deduced by Einstein (1905).
For ϕ = 90° (cos ϕ = 0) this reduces to fdetected = frest/γ. This lower frequency from the moving source can be attributed to the time dilation effect and is often called the transverse Doppler effect, and was predicted by relativity. - In 2010 time dilation was observed at speeds of less than 10 meters per second using optical atomic clocks connected by 75 meters of optical fiber. - In 1959 Robert Pound and Glen A. Rebka measured the very slight gravitational red shift in the frequency of light emitted at a lower height, where Earth's gravitational field is relatively more intense. The results were within 10% of the predictions of general relativity. In 1964, Pound and J. L. Snider measured a result within 1% of the value predicted by gravitational time dilation. (See the Pound–Rebka experiment.) - In 2010 gravitational time dilation was measured at the earth's surface with a height difference of only one meter, using optical atomic clocks. - Hafele and Keating, in 1971, flew caesium atomic clocks east and west around the earth in commercial airliners, to compare the elapsed time against that of a clock that remained at the U.S. Naval Observatory. Two opposite effects came into play. The clocks were expected to age more quickly (show a larger elapsed time) than the reference clock, since they were in a higher (weaker) gravitational potential for most of the trip (cf. the Pound–Rebka experiment). But also, contrastingly, the moving clocks were expected to age more slowly because of the speed of their travel. From the actual flight paths of each trip, the theory predicted that the flying clocks, compared with reference clocks at the U.S. Naval Observatory, should have lost 40±23 nanoseconds during the eastward trip and should have gained 275±21 nanoseconds during the westward trip. Relative to the atomic time scale of the U.S. Naval Observatory, the flying clocks lost 59±10 nanoseconds during the eastward trip and gained 273±7 nanoseconds during the westward trip (where the error bars represent standard deviation). In 2005, the National Physical Laboratory in the United Kingdom reported their limited replication of this experiment. The NPL experiment differed from the original in that the caesium clocks were sent on a shorter trip (London–Washington, D.C. return), but the clocks were more accurate. The reported results are within 4% of the predictions of relativity, within the uncertainty of the measurements. - The Global Positioning System can be considered a continuously operating experiment in both special and general relativity. The in-orbit clocks are corrected for both special and general relativistic time dilation effects as described above, so that (as observed from the earth's surface) they run at the same rate as clocks on the surface of the Earth. A comparison of muon lifetimes at different speeds is possible. In the laboratory, slow muons are produced; and in the atmosphere, very fast-moving muons are introduced by cosmic rays. Taking the muon lifetime at rest as the laboratory value of 2.197 μs, the lifetime of a cosmic-ray-produced muon traveling at 98% of the speed of light is about five times longer, in agreement with observations. In the muon storage ring at CERN the lifetime of muons circulating with γ = 29.327 was found to be dilated to 64.378 μs, confirming time dilation to an accuracy of 0.9 ± 0.4 parts per thousand.
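The muon figures quoted above can be reproduced directly from the Lorentz factor. The sketch below uses the rounded rest lifetime of 2.197 μs given in the text; the small residual difference from the measured storage-ring value is at the part-per-thousand level.

```python
import math

tau_rest = 2.197e-6   # muon lifetime at rest (s), rounded value quoted above

# Cosmic-ray muon at 98% of the speed of light
gamma_cosmic = 1.0 / math.sqrt(1.0 - 0.98 ** 2)
print(f"gamma at 0.98c = {gamma_cosmic:.2f}")                        # ~5, i.e. "about five times longer"
print(f"dilated lifetime = {gamma_cosmic * tau_rest * 1e6:.2f} us")

# CERN muon storage ring, gamma = 29.327: predicted dilated lifetime,
# to be compared with the measured 64.378 us.
print(f"predicted at gamma=29.327: {29.327 * tau_rest * 1e6:.2f} us")
```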
In this experiment the "clock" is the time taken by processes leading to muon decay, and these processes take place in the moving muon at its own "clock rate", which is much slower than the laboratory clock. Time dilation would make it possible for passengers in a fast-moving vehicle to travel further into the future while aging very little, in that their great speed slows down the passage of on-board time relative to that of an observer who stayed behind. That is, the ship's clock (and according to relativity, any human traveling with it) shows less elapsed time than the clocks of observers on Earth. For sufficiently high speeds the effect is dramatic. For example, one year of travel might correspond to ten years at home. Indeed, a constant 1 g acceleration would permit humans to travel through the entire known Universe in one human lifetime. The space travelers could return to Earth billions of years in the future. A scenario based on this idea was presented in the novel Planet of the Apes by Pierre Boulle. A more likely use of this effect would be to enable humans to travel to nearby stars without spending their entire lives aboard a ship. However, any such application of time dilation during interstellar travel would require the use of some new, advanced method of propulsion. The Orion Project has been the only major attempt toward this idea. Current space flight technology has fundamental theoretical limits based on the practical problem that an increasing amount of energy is required for propulsion as a craft approaches the speed of light. The likelihood of collision with small space debris and other particulate material is another practical limitation. At the velocities presently attained, however, time dilation occurs but is too small to be a factor in space travel. Travel to regions of spacetime where gravitational time dilation is taking place, such as within the gravitational field of a black hole but outside the event horizon (perhaps on a hyperbolic trajectory exiting the field), could also yield results consistent with present theory. In special relativity, time dilation is most simply described in circumstances where relative velocity is unchanging. Nevertheless, the Lorentz equations allow one to calculate proper time and movement in space for the simple case of a spaceship subjected to a constant force per unit mass g (measured relative to some reference object in uniform, i.e. constant velocity, motion) throughout the period of measurement. Let t be the time in an inertial frame subsequently called the rest frame. Let x be a spatial coordinate, and let the direction of the constant acceleration as well as the spaceship's velocity (relative to the rest frame) be parallel to the x-axis. Assuming the spaceship's position at time t = 0 to be x = 0 and its velocity to be v0, and defining the abbreviation γ0 = 1/√(1 − v0²/c²), the following formulas hold: v(t) = (v0γ0 + gt)/√(1 + (v0γ0 + gt)²/c²), x(t) = (c²/g)(√(1 + (v0γ0 + gt)²/c²) − γ0), and τ(t) = τ0 + ∫ √(1 − v(t′)²/c²) dt′, integrated from 0 to t. In the case where v(0) = v0 = 0 and τ(0) = τ0 = 0 the integral can be expressed as a logarithmic function or, equivalently, as an inverse hyperbolic function: τ(t) = (c/g) ln(gt/c + √(1 + (gt/c)²)) = (c/g) arsinh(gt/c). The green dots and red dots in the animation represent spaceships. The ships of a given fleet (color) have no velocity relative to each other, so for the clocks on board the individual ships within a given fleet, the same amount of time elapses relative to each other. Therefore, ships within a given fleet can set up a procedure to maintain a synchronized standard fleet time. The ships of the red fleet are moving with a velocity of 0.866c with respect to the green fleet.
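As a rough numerical companion to the formulas above (and to the fleet animation described next), the sketch below evaluates the Lorentz factor at 0.866c and the zero-initial-velocity proper-time expression for a constant 1 g acceleration; the trip durations are arbitrary illustrative choices.

```python
import math

c = 299_792_458.0
g = 9.81                   # constant proper acceleration (m/s^2)
year = 365.25 * 86_400.0   # seconds per year

# The two-to-one ratio between "green time" and "red time" in the animation
# follows directly from the Lorentz factor at 0.866c.
gamma = 1.0 / math.sqrt(1.0 - 0.866 ** 2)
print(f"gamma at 0.866c = {gamma:.3f}")        # ~2.00

def proper_time(t: float) -> float:
    """Shipboard time after coordinate time t under constant 1 g
    acceleration, starting from rest: tau = (c/g) * asinh(g*t/c)."""
    return (c / g) * math.asinh(g * t / c)

for years in (1, 10, 100):
    print(f"{years:>3} Earth years -> {proper_time(years * year) / year:6.2f} ship years")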
The blue dots represent pulses of light. One cycle of light-pulses between two green ships takes two seconds of "green time", one second for each leg. As seen from the perspective of the reds, the transit time of the light pulses they exchange among each other is one second of "red time" for each leg. As seen from the perspective of the greens, the red ships' cycle of exchanging light pulses travels a diagonal path that is two light-seconds long. (As seen from the green perspective the reds travel 1.73 (≈ √3) light-seconds of distance for every two seconds of green time.) One of the red ships emits a light pulse towards the greens every second of red time. These pulses are received by ships of the green fleet with two-second intervals as measured in green time. Not shown in the animation is that all aspects of physics are proportionally involved. The light pulses that are emitted by the reds at a particular frequency as measured in red time are received at a lower frequency as measured by the detectors of the green fleet that measure against green time, and vice versa. The animation cycles between the green perspective and the red perspective, to emphasize the symmetry. As there is no such thing as absolute motion in relativity (as is also the case for Newtonian mechanics), both the green and the red fleet are entitled to consider themselves motionless in their own frame of reference. Again, it is vital to understand that the results of these interactions and calculations reflect the real state of the ships as it emerges from their situation of relative motion. It is not a mere quirk of the method of measurement or communication. - Average time dilation has a weak dependence on the orbital inclination angle (Ashby 2003, p.32). The r ≈ 1.497 result corresponds to the orbital inclination of modern GPS satellites, which is 55 degrees. - Ashby, Neil (2003). "Relativity in the Global Positioning System" (PDF). Living Reviews in Relativity. 6: 16. Bibcode:2003LRR.....6....1A. doi:10.12942/lrr-2003-1. - Toothman, Jessika. "How Do Humans age in space?". HowStuffWorks. Retrieved 2012-04-24. - Lu, Ed. "Expedition 7 – Relativity". Ed's Musing from Space. NASA. Retrieved 2012-04-24. - Lu, Ed. "Expedition 7 – Relativity". Ed's Musing from Space. NASA. Retrieved 2015-01-20. In fact it gives 0.007 seconds as the result, but this is easily seen to be due to crude intermediate rounding. - For sources on special relativistic time dilation, see Albert Einstein's own popular exposition, published in English translation (1920) as Einstein, Albert (1920). "On the Idea of Time in Physics". Relativity: The Special and General Theory. Henry Holt. ISBN 1-58734-092-5. and also in sections 9–12. See also the articles Special relativity, Lorentz transformation and Relativity of simultaneity. - "New calculations show Earth's core is much younger than thought". Phys.org. 26 May 2016. - "Hafele and Keating Experiment". NA. Retrieved 2015-02-04. - Overbye, Dennis (2005-06-28). "A Trip Forward in Time. Your Travel Agent: Einstein.". The New York Times. Retrieved 2015-12-08. - Gott, J. Richard (2002). Time Travel in Einstein's Universe. p. 75. - Cassidy, David C.; Holton, Gerald James; Rutherford, Floyd James (2002). Understanding Physics. Springer-Verlag. p. 422. ISBN 0-387-98756-8. - Cutner, Mark Leslie (2003). Astronomy, A Physical Perspective. Cambridge University Press. p. 128. ISBN 0-521-82196-7. - Lerner, Lawrence S. (1996). Physics for Scientists and Engineers, Volume 2. Jones and Bartlett. pp. 1051–1052.
ISBN 0-7637-0460-1. - Ellis, George F. R.; Williams, Ruth M. (2000). Flat and Curved Space-times (2nd ed.). Oxford University Press. pp. 28–29. ISBN 0-19-850657-0. - Adams, Steve (1997). Relativity: An introduction to space-time physics. CRC Press. p. 54. ISBN 0-7484-0621-2. - Edwin F. Taylor, John Archibald Wheeler (1992). Spacetime Physics: Introduction to Special Relativity. New York: W. H. Freeman. ISBN 0-7167-2327-1. - Ashby, Neil (2002). "Relativity in the Global Positioning System". Physics Today. 55 (5): 45. Bibcode:2002PhT....55e..41A. doi:10.1063/1.1485583. - See equations 2 & 3 (combined here and divided throughout by c²) at pp. 35–36 in Moyer, T. D. (1981). "Transformation from proper time on Earth to coordinate time in solar system barycentric space-time frame of reference". Celestial Mechanics. 23: 33–56. Bibcode:1981CeMec..23...33M. doi:10.1007/BF01228543. - A version of the same relationship can also be seen at equation 2 in Ashby, Neil (2002). "Relativity and the Global Positioning System" (PDF). Physics Today. 55 (5): 45. Bibcode:2002PhT....55e..41A. doi:10.1063/1.1485583. - Blaszczak, Z. (2007). Laser 2006. Springer. p. 59. ISBN 3540711139. - Hasselkamp, D.; Mondry, E.; Scharmann, A. (1979). "Direct observation of the transversal Doppler-shift". Zeitschrift für Physik A. 289 (2): 151–155. Bibcode:1979ZPhyA.289..151H. doi:10.1007/BF01435932. - Einstein, A. (1905). "On the electrodynamics of moving bodies". Fourmilab. - Chou, C. W.; Hume, D. B.; Rosenband, T.; Wineland, D. J. (2010). "Optical Clocks and Relativity". Science. 329 (5999): 1630–1633. Bibcode:2010Sci...329.1630C. doi:10.1126/science.1192720. PMID 20929843. - Pound, R. V.; Snider, J. L. (November 2, 1964). "Effect of Gravity on Nuclear Resonance". Physical Review Letters. 13 (18): 539–540. Bibcode:1964PhRvL..13..539P. doi:10.1103/PhysRevLett.13.539. - Nave, C. R. (22 August 2005). "Hafele and Keating Experiment". HyperPhysics. Retrieved 2013-08-05. - "Einstein" (PDF). Metromnia. National Physical Laboratory. 2005. pp. 1–4. - Kaplan, Elliott; Hegarty, Christopher (2005). Understanding GPS: Principles and Applications. Artech House. p. 306. ISBN 1-58053-895-9. - Stewart, J. V. (2001). Intermediate electromagnetic theory. World Scientific. p. 705. ISBN 981-02-4470-3. - Bailey, J.; et al. (1977). Nature. 268: 301. - Calder, Nigel (2006). Magic Universe: A grand tour of modern science. Oxford University Press. p. 378. ISBN 0-19-280669-6. - See equations 3, 4, 6 and 9 of Iorio, Lorenzo (2004). "An analytical treatment of the Clock Paradox in the framework of the Special and General Theories of Relativity". Foundations of Physics Letters. 18: 1–19. arXiv:. Bibcode:2005FoPhL..18....1I. doi:10.1007/s10702-005-2466-8. - Callender, C.; Edney, R. (2001). Introducing Time. Icon Books. ISBN 1-84046-592-1. - Einstein, A. (1905). "Zur Elektrodynamik bewegter Körper". Annalen der Physik. 322 (10): 891. Bibcode:1905AnP...322..891E. doi:10.1002/andp.19053221004. - Einstein, A. (1907). "Über die Möglichkeit einer neuen Prüfung des Relativitätsprinzips". Annalen der Physik. 328 (6): 197–198. Bibcode:1907AnP...328..197E. doi:10.1002/andp.19073280613. - Ives, H. E.; Stilwell, G. R. (1938). "An experimental study of the rate of a moving clock". Journal of the Optical Society of America. 28 (7): 215–226. 
doi:10.1364/JOSA.28.000215. - Ives, H. E.; Stilwell, G. R. (1941). "An experimental study of the rate of a moving clock. II". Journal of the Optical Society of America. 31 (5): 369–374. doi:10.1364/JOSA.31.000369. - Joos, G. (1959). "Lehrbuch der Theoretischen Physik, Zweites Buch" (11th ed.). - Larmor, J. (1897). "On a dynamical theory of the electric and luminiferous medium". Philosophical Transactions of the Royal Society. 190: 205–300. Bibcode:1897RSPTA.190..205L. doi:10.1098/rsta.1897.0020. (third and last in a series of papers with the same name). - Poincaré, H. (1900). "La théorie de Lorentz et le principe de Réaction". Archives Néerlandaises. 5: 253–78. - Puri, A. (2015). "Einstein versus the simple pendulum formula: does gravity slow all clocks?". Physics Education. 50 (4): 431. Bibcode:2015PhyEd..50..431P. doi:10.1088/0031-9120/50/4/431. - Reinhardt, S.; et al. (2007). "Test of relativistic time dilation with fast optical atomic clocks at different velocities" (PDF). Nature Physics. 3 (12): 861–864. Bibcode:2007NatPh...3..861R. doi:10.1038/nphys778. - Rossi, B.; Hall, D. B. (1941). "Variation of the Rate of Decay of Mesotrons with Momentum". Physical Review. 59 (3): 223. Bibcode:1941PhRv...59..223R. doi:10.1103/PhysRev.59.223. - Weiss, M. "Two way time transfer for satellites". National Institute of Standards and Technology. - Voigt, W. (1887). "Über das Doppler'sche princip". Nachrichten von der Königlicher Gesellschaft der Wissenschaften zu Göttingen. 2: 41–51.
In cryptography, plaintext usually means unencrypted information pending input into cryptographic algorithms, usually encryption algorithms. Cleartext usually refers to data that is transmitted or stored unencrypted ('in the clear'). With the advent of computing, the term plaintext expanded beyond human-readable documents to mean any data, including binary files, in a form that can be viewed or used without requiring a key or other decryption device. Information – a message, document, file, etc. – that is to be communicated or stored in encrypted form is referred to, before encryption, as plaintext. Plaintext is used as input to an encryption algorithm; the output is usually termed ciphertext, particularly when the algorithm is a cipher. Codetext is less often used, and almost always only when the algorithm involved is actually a code. Some systems use multiple layers of encryption, with the output of one encryption algorithm becoming "plaintext" input for the next. Insecure handling of plaintext can introduce weaknesses into a cryptosystem by letting an attacker bypass the cryptography altogether. Plaintext is vulnerable in use and in storage, whether in electronic or paper format. Physical security means securing information and its storage media against physical attack – for instance by someone entering a building to access papers, storage media, or computers. Discarded material, if not disposed of securely, may be a security risk. Even shredded documents and erased magnetic media might be reconstructed with sufficient effort. If plaintext is stored in a computer file, the storage media, the computer and its components, and all backups must be secure. Sensitive data is sometimes processed on computers whose mass storage is removable, in which case physical security of the removed disk is vital. In the case of securing a computer, useful (as opposed to handwaving) security must be physical (e.g., against burglary, brazen removal under cover of supposed repair, installation of covert monitoring devices, etc.), as well as virtual (e.g., operating system modification, illicit network access, Trojan programs). The wide availability of keydrives, which can plug into most modern computers and store large quantities of data, poses another severe security headache. A spy (perhaps posing as a cleaning person) could easily conceal one, and even swallow it if necessary. Discarded computers, disk drives and media are also a potential source of plaintexts. Most operating systems do not actually erase anything – they simply mark the disk space occupied by a deleted file as 'available for use', and remove its entry from the file system directory. The information in a file deleted in this way remains fully present until overwritten at some later time when the operating system reuses the disk space. With even low-end computers commonly sold with many gigabytes of disk space, and capacities rising monthly, this 'later time' may be months later, or never. Even overwriting the portion of a disk surface occupied by a deleted file is insufficient in many cases. Peter Gutmann of the University of Auckland wrote a celebrated 1996 paper on the recovery of overwritten information from magnetic disks; areal storage densities have gotten much higher since then, so this sort of recovery is likely to be more difficult than it was when Gutmann wrote. Modern hard drives automatically remap failing sectors, moving data to good sectors. This process makes information on those failing, excluded sectors invisible to the file system and normal applications.
Special software, however, can still extract information from them. Some government agencies (e.g., the US NSA) require that personnel physically pulverize discarded disk drives and, in some cases, treat them with chemical corrosives. This practice is not widespread outside government, however. Garfinkel and Shelat (2003) analyzed 158 second-hand hard drives they acquired at garage sales and the like, and found that fewer than 10% had been sufficiently sanitized. The others contained a wide variety of readable personal and confidential information. See data remanence. Physical loss is a serious problem. The US State Department, Department of Defense, and the British Secret Service have all had laptops with secret information, including in plaintext, lost or stolen. Appropriate disk encryption techniques can safeguard data on misappropriated computers or media. On occasion, even when data on host systems is encrypted, media that personnel use to transfer data between systems is plaintext because of poorly designed data policy. For example, in October 2007, HM Revenue and Customs lost CDs that contained the unencrypted records of 25 million child benefit recipients in the United Kingdom. Modern cryptographic systems resist known-plaintext or even chosen-plaintext attacks, and so may not be entirely compromised when plaintext is lost or stolen. Older systems resisted the effects of plaintext data loss on security with less effective techniques, such as padding and Russian copulation, to obscure information in plaintext that could be easily guessed.
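A minimal illustration of the plaintext/ciphertext terminology used above, written with the third-party Python cryptography package (an assumption: it must be installed separately, and it is not the specific system discussed in this article):

```python
# pip install cryptography  (third-party package, assumed available)
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the symmetric key itself must also be protected
cipher = Fernet(key)

plaintext = b"meet at dawn"               # unencrypted input to the algorithm
ciphertext = cipher.encrypt(plaintext)    # output: safe to store or transmit

assert cipher.decrypt(ciphertext) == plaintext
print(ciphertext)
# Note: simply deleting a file that once held `plaintext` would not remove it
# from the underlying disk, for the reasons described above.
```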
Measurement in Standard Units
Students choose the proper measurement unit and the corresponding measurement tool needed for a particular measurement in length, volume, and weight/mass.
Similar resources:
Measuring Mass Using Non-Standard Units of Measurement (1st - 2nd, Math) - Students explore mass measurements. In this non-standard unit measurement lesson plan, students use non-standard units of measurement to find the weight of different objects. They work in small groups and complete a variety of measuring...
How Tall is the Gingerbread Man? (Pre-K - 2nd, Math) - The gingerbread man has finally been caught, now find out how tall he is with this measurement activity. Using Unifix® cubes, marshmallows, buttons, and Cheerios, young mathematicians measure and record the height of the gingerbread man...
Convert Between Units: Using a T-Chart (5 mins, 2nd - 6th, Math) - Learning to convert between different units of measurement can be a challenge for many young mathematicians. The final video in this series supports learners with this topic by clearly modeling how T-charts and repeated addition are used...
Cover the Area of a Shape Using Square Units (3 mins, 2nd - 4th, Math) - This is the first of a series of four lessons designed to examine and understand the concept of area. A review of area starts the lesson, along with a discussion of different attributes that can describe a rectangle. The term square unit...
An eyepiece, or ocular lens, is a type of lens that is attached to a variety of optical devices such as telescopes and microscopes. It is so named because it is usually the lens that is closest to the eye when someone looks through the device. The objective lens or mirror collects light and brings it to a focus, creating an image. The eyepiece is placed near the focal point of the objective to magnify this image. The amount of magnification depends on the focal length of the eyepiece. An eyepiece consists of several "lens elements" in a housing, with a "barrel" on one end. The barrel is shaped to fit in a special opening of the instrument to which it is attached. The image can be focused by moving the eyepiece nearer to and further from the objective. Most instruments have a focusing mechanism to allow movement of the shaft in which the eyepiece is mounted, without needing to manipulate the eyepiece directly. The eyepieces of binoculars are usually permanently mounted in the binoculars, causing them to have a pre-determined magnification and field of view. With telescopes and microscopes, however, eyepieces are usually interchangeable. By switching the eyepiece, the user can adjust what is viewed. For instance, eyepieces will often be interchanged to increase or decrease the magnification of a telescope. Eyepieces also offer varying fields of view, and differing degrees of eye relief for the person who looks through them. Several properties of an eyepiece are likely to be of interest to a user of an optical instrument, when comparing eyepieces and deciding which eyepiece suits their needs.
Design distance to entrance pupil
Eyepieces are optical systems where the entrance pupil is invariably located outside of the system. They must be designed for optimal performance for a specific distance to this entrance pupil (i.e. with minimum aberrations for this distance). In a refracting astronomical telescope the entrance pupil is identical with the objective. This may be several feet distant from the eyepiece, whereas with a microscope eyepiece the entrance pupil is close to the back focal plane of the objective, mere inches from the eyepiece. Microscope eyepieces may be corrected differently from telescope eyepieces; however, most are also suitable for telescope use.
Elements and groups
Elements are the individual lenses, which may come as simple lenses or "singlets" and cemented doublets or (rarely) triplets. When lenses are cemented together in pairs or triples, the combined elements are called groups (of lenses). The first eyepieces had only a single lens element, which delivered highly distorted images. Two and three-element designs were invented soon after, and quickly became standard due to the improved image quality. Today, engineers assisted by computer-aided drafting software have designed eyepieces with seven or eight elements that deliver exceptionally large, sharp views.
Internal reflection and scatter
Internal reflections, sometimes called "scatter", cause the light passing through an eyepiece to disperse and reduce the contrast of the image projected by the eyepiece. When the effect is particularly bad, "ghost images" are seen, called "ghosting".
For many years, simple eyepiece designs with a minimum number of internal air-to-glass surfaces were preferred to avoid this problem. One solution to scatter is to use thin film coatings over the surface of the element. These thin coatings are only one or two wavelengths deep, and work to reduce reflections and scattering by changing the refraction of the light passing through the element. Some coatings may also absorb light that is not being passed through the lens in a process called total internal reflection where the light incident on the film is at a shallow angle. Lateral or transverse chromatic aberration is caused because the refraction at glass surfaces differs for light of different wavelengths. Blue light, seen through an eyepiece element, will not come to a focus at the same point as red light, though along the same axis. The effect can create a ring of false colour around point sources of light and results in a general blurriness of the image. One solution is to reduce the aberration by using multiple elements of different types of glass. Achromats are lens groups that bring two different wavelengths of light to the same focus and exhibit greatly reduced false colour. Low dispersion glass may also be used to reduce chromatic aberration. Longitudinal chromatic aberration is a pronounced effect of optical telescope objectives, because the focal lengths are so long. Microscopes, whose focal lengths are generally shorter, do not tend to suffer from this effect. The focal length of an eyepiece is the distance from the principal plane of the eyepiece where parallel rays of light converge to a single point. When in use, the focal length of an eyepiece, combined with the focal length of the telescope or microscope objective to which it is attached, determines the magnification. It is usually expressed in millimetres when referring to the eyepiece alone. When interchanging a set of eyepieces on a single instrument, however, some users prefer to identify each eyepiece by the magnification produced. For a telescope, the angular magnification MA produced by the combination of a particular eyepiece and objective can be calculated with the following formula: MA = fO / fE, where: - fO is the focal length of the objective, - fE is the focal length of the eyepiece. Magnification increases, therefore, when the focal length of the eyepiece is shorter or the focal length of the objective is longer. For example, a 25 mm eyepiece in a telescope with a 1200 mm focal length would magnify objects 48 times. A 4 mm eyepiece in the same telescope would magnify 300 times. Amateur astronomers tend to refer to telescope eyepieces by their focal length in millimetres. These typically range from about 3 mm to 50 mm. Some astronomers, however, prefer to specify the resulting magnification power rather than the focal length. It is often more convenient to express magnification in observation reports, as it gives a more immediate impression of what view the observer actually saw. Due to its dependence on properties of the particular telescope in use, however, magnification power alone is meaningless for describing a telescope eyepiece. For a compound microscope the corresponding formula is MA = (D / fE) × (T / fO), where: - D is the distance of closest distinct vision (usually 250 mm), - T is the distance between the back focal plane of the objective and the back focal plane of the eyepiece (called the tube length), typically 160 mm for a modern instrument, - fO is the objective focal length and fE is the eyepiece focal length.
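A quick numerical check of the two magnification formulas above; the microscope helper assumes the finite (160 mm) tube-length convention described in the text, and the 16 mm objective is an arbitrary illustrative value.

```python
def telescope_magnification(f_objective_mm: float, f_eyepiece_mm: float) -> float:
    # M_A = f_O / f_E
    return f_objective_mm / f_eyepiece_mm

def microscope_magnification(f_objective_mm: float, f_eyepiece_mm: float,
                             tube_length_mm: float = 160.0,
                             near_point_mm: float = 250.0) -> float:
    # M_A = (T / f_O) * (D / f_E)
    return (tube_length_mm / f_objective_mm) * (near_point_mm / f_eyepiece_mm)

# The telescope example from the text: 1200 mm objective, 25 mm and 4 mm eyepieces.
print(telescope_magnification(1200, 25))   # 48.0
print(telescope_magnification(1200, 4))    # 300.0

# A 16 mm objective with a 25 mm eyepiece on a 160 mm tube-length microscope.
print(round(microscope_magnification(16, 25)))   # 100
```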
By convention, microscope eyepieces are usually specified by power instead of focal length. Microscope eyepiece power PE and objective power PO are defined by PE = D / fE and PO = T / fO; thus, from the expression given earlier for the angular magnification of a compound microscope, MA = PO × PE. The total angular magnification of a microscope image is then simply calculated by multiplying the eyepiece power by the objective power. For example, a 10× eyepiece with a 40× objective will magnify the image 400 times. This definition of lens power relies upon an arbitrary decision to split the angular magnification of the instrument into separate factors for the eyepiece and the objective. Historically, Abbe described microscope eyepieces differently, in terms of angular magnification of the eyepiece and 'initial magnification' of the objective. While convenient for the optical designer, this turned out to be less convenient from the viewpoint of practical microscopy and was thus subsequently abandoned. The generally accepted visual distance of closest focus is 250 mm, and eyepiece power is normally specified assuming this value. Common eyepiece powers are 8×, 10×, 15×, and 20×. The focal length of the eyepiece (in mm) can thus be determined if required by dividing 250 mm by the eyepiece power. Modern instruments often use objectives optically corrected for an infinite tube length rather than 160 mm, and these require an auxiliary correction lens in the tube.
Location of focal plane
In some eyepiece types, such as Ramsden eyepieces (described in more detail below), the eyepiece behaves as a magnifier, and its focal plane is located outside of the eyepiece in front of the field lens. This plane is therefore accessible as a location for a graticule or micrometer crosswires. In the Huygenian eyepiece, the focal plane is located between the eye and field lenses, inside the eyepiece, and is hence not accessible.
Field of view
The field of view, often abbreviated FOV, describes the area of a target (measured as an angle from the location of viewing) that can be seen when looking through an eyepiece. The field of view seen through an eyepiece varies, depending on the magnification achieved when connected to a particular telescope or microscope, and also on properties of the eyepiece itself. Eyepieces are differentiated by their field stop, which is the narrowest aperture that light entering the eyepiece must pass through to reach the field lens of the eyepiece. Due to the effects of these variables, the term "field of view" nearly always refers to one of two meanings: - Actual field of view - the angular size of the amount of sky that can be seen through an eyepiece when used with a particular telescope, producing a specific magnification. It is typically between one tenth of a degree and two degrees. - Apparent field of view - this is a measure of the angular size of the image viewed through the eyepiece, in other words, how large the image appears (as distinct from the magnification). This is constant for any given eyepiece of fixed focal length, and may be used to calculate what the actual field of view will be when the eyepiece is used with a given telescope. The measurement ranges from 30 to 110 degrees. It is common for users of an eyepiece to want to calculate the actual field of view, because it indicates how much of the sky will be visible when the eyepiece is used with their telescope. The most convenient method of calculating the actual field of view depends on whether the apparent field of view is known.
If the apparent field of view is known, the actual field of view can be calculated from the following approximate formula: FOVA = FOVE / M = FOVE × fE / fT, where: - FOVA is the actual field of view, calculated in the unit of angular measurement in which FOVE is provided. - FOVE is the apparent field of view. - M is the magnification. - fT is the focal length of the telescope. - fE is the focal length of the eyepiece, expressed in the same units of measurement as fT. The focal length of the telescope objective is the diameter of the objective times the focal ratio. It represents the distance at which the mirror or objective lens will cause light to converge on a single point. The formula is accurate to 4% or better up to 40° apparent field of view, and has a 10% error for 60°. If the apparent field of view is unknown, the actual field of view can be approximately found using: FOVA ≈ 57.3 × d / fT, where: - FOVA is the actual field of view, calculated in degrees. - d is the diameter of the eyepiece field stop in mm. - fT is the focal length of the telescope, in mm. The second formula is actually more accurate, but field stop size is not usually specified by most manufacturers. The first formula will not be accurate if the field is not flat, or is higher than 60°, which is common for most ultra-wide eyepiece designs. The above formulae are approximations. The ISO 14132-1:2002 standard determines how the exact apparent angle of view (AAOV) is calculated from the real angle of view (AOV). If a diagonal or Barlow lens is used before the eyepiece, the eyepiece's field of view may be slightly restricted. This occurs when the preceding lens has a narrower field stop than the eyepiece's, causing the obstruction in the front to act as a smaller field stop in front of the eyepiece. The apparent field of view, the field stop diameter d and the eyepiece focal length fE are related, to a good approximation, by FOVE ≈ 57.3 × d / fE (in degrees). This relationship also indicates that, for an eyepiece design with a given apparent field of view, the barrel diameter will determine the maximum focal length possible for that eyepiece, as no field stop can be larger than the barrel itself. For example, a Plössl with 45° apparent field of view in a 1.25 inch barrel would yield a maximum focal length of 35 mm. Anything longer requires a larger barrel, or the view is restricted by the edge, effectively making the field of view less than 45°. Eyepieces for telescopes and microscopes are usually interchanged to increase or decrease the magnification, and to enable the user to select a type with certain performance characteristics. To allow this, eyepieces come in standardized "barrel diameters". There are six standard barrel diameters for telescopes. The barrel sizes (usually expressed in inches) are: - 0.965 inch (24.5 mm) - This is the smallest standard barrel diameter and is usually found in toy store and shopping mall retail telescopes. Many of the eyepieces that come with such telescopes are plastic, and some even have plastic lenses. High-end telescope eyepieces with this barrel size are no longer manufactured, but Kellner types can still be purchased. - 1¼ inch (31.75 mm) - 1¼ inch is the most popular telescope eyepiece barrel diameter. The practical upper limit on focal lengths for eyepieces with 1¼ inch barrels is about 32 mm. With longer focal lengths, the edges of the barrel itself intrude into the view, limiting its size. With focal lengths longer than 32 mm, the available field of view falls below 50°, which most amateurs consider to be the minimum acceptable width. These barrel sizes are threaded to take 30 mm filters. - 2 inch (50.8 mm) - The larger barrel size in 2 inch eyepieces helps alleviate the limit on focal lengths.
The upper limit of focal length with 2 inch eyepieces is about 55 mm. The trade-off is that these eyepieces are usually more expensive, won't fit in some telescopes, and may be heavy enough to tip the telescope. These barrel sizes are threaded to take 48 mm filters (or rarely 49 mm). - 2.7 inch (68.58 mm) - 2.7 inch eyepieces are made by a few manufacturers. They allow for slightly larger fields of view. Many high-end focusers now accept these eyepieces. - 3 inch (76.2 mm) - The even larger barrel size in 3 inch eyepieces allows for extreme focal lengths and over 120° field of view eyepieces. The disadvantages are that these eyepieces are somewhat rare, extremely expensive, up to 5 lbs in weight, and that only a few telescopes have focusers large enough to accept them. Their huge weight causes balancing issues in Schmidt-Cassegrains under 10 inches, refractors under 5 inches, and reflectors under 16 inches. Also, due to their large field stops, without larger secondary mirrors most reflectors and Schmidt-Cassegrains will have severe vignetting with these eyepieces. Makers of these eyepieces include Explore Scientific and Siebert Optics. Telescopes that can accept these eyepieces are made by Explore Scientific and Orion Telescopes and Binoculars. - 4 inch (102 mm) - These eyepieces are rare and only commonly used in observatories. They are made by very few manufacturers, and demand for them is low. Eyepieces for microscopes have barrel diameters measured in millimeters such as 23.2 mm and 30 mm. The eye needs to be held at a certain distance behind the eye lens of an eyepiece to see images properly through it. This distance is called the eye relief. A larger eye relief means that the optimum position is farther from the eyepiece, making it easier to view an image. However, if the eye relief is too large it can be uncomfortable to hold the eye in the correct position for an extended period of time, for which reason some eyepieces with long eye relief have cups behind the eye lens to aid the observer in maintaining the correct observing position. The eye pupil should coincide with the exit pupil, the image of the entrance pupil, which in the case of an astronomical telescope corresponds to the object glass. Eye relief typically ranges from about 2 mm to 20 mm, depending on the construction of the eyepiece. Long focal-length eyepieces usually have ample eye relief, but short focal-length eyepieces are more problematic. Until recently, and still quite commonly, eyepieces of a short-focal length have had a short eye relief. Good design guidelines suggest a minimum of 5–6 mm to accommodate the eyelashes of the observer to avoid discomfort. Modern designs with many lens elements, however, can correct for this, and viewing at high power becomes more comfortable. This is especially the case for spectacle wearers, who may need up to 20 mm of eye relief to accommodate their glasses. Technology has developed over time and there are a variety of eyepiece designs for use with telescopes, microscopes, gun-sights, and other devices. Some of these designs are described in more detail below. Negative lens or "Galilean" The simple negative lens placed before the focus of the objective has the advantage of presenting an erect image but with limited field of view better suited to low magnification. It is suspected this type of lens was used in some of the first refracting telescopes that appeared in the Netherlands in about 1608. 
It was also used in Galileo Galilei's 1609 telescope design, which gave this type of eyepiece arrangement the name "Galilean". This type of eyepiece is still used in very cheap telescopes, binoculars and in opera glasses. A simple convex lens placed after the focus of the objective lens presents the viewer with a magnified inverted image. This configuration may have been used in the first refracting telescopes from the Netherlands and was proposed as a way to have a much wider field of view and higher magnification in telescopes in Johannes Kepler's 1611 book Dioptrice. Since the lens is placed after the focal plane of the objective it also allowed for use of a micrometer at the focal plane (used for determining the angular size and/or distance between objects observed). Huygens eyepieces consist of two plano-convex lenses with the plane sides towards the eye separated by an air gap. The lenses are called the eye lens and the field lens. The focal plane is located between the two lenses. It was invented by Christiaan Huygens in the late 1660s and was the first compound (multi-lens) eyepiece. Huygens discovered that two air-spaced lenses can be used to make an eyepiece with zero transverse chromatic aberration. If the lenses are made of glass of the same refractive index, to be used with a relaxed eye and a telescope with an infinitely distant objective, then the separation is given by d = (f1 + f2)/2, where f1 and f2 are the focal lengths of the component lenses. These eyepieces work well with very long focal length telescopes (in Huygens' day they were used with single element long focal length non-achromatic refracting telescopes, including very long focal length aerial telescopes). This optical design is now considered obsolete since with today's shorter focal length telescopes the eyepiece suffers from short eye relief, high image distortion, chromatic aberration, and a very narrow apparent field of view. Since these eyepieces are cheap to make they can often be found on inexpensive telescopes and microscopes. Because Huygens eyepieces do not contain cement to hold the lens elements, telescope users sometimes use these eyepieces in the role of "solar projection", i.e. projecting an image of the Sun onto a screen. Cemented eyepieces, in contrast, can be damaged by the intense, concentrated light of the Sun. The Ramsden eyepiece comprises two plano-convex lenses of the same glass and similar focal lengths, placed less than one eye-lens focal length apart, a design created by astronomical and scientific instrument maker Jesse Ramsden in 1782. The lens separation varies between different designs, but is typically somewhere between 7/10 and 7/8 of the focal length of the eye-lens, the choice being a trade-off between residual transverse chromatic aberration (at low values) and, at high values, the risk of the field lens touching the focal plane when the eyepiece is used by an observer who works with a close virtual image, such as a myopic observer or a young person whose accommodation is able to cope with a close virtual image (a serious problem when used with a micrometer, as it can result in damage to the instrument). A separation of exactly 1 focal length is also inadvisable since it renders the dust on the field lens disturbingly in focus. The two curved surfaces face inwards. The focal plane is thus located outside of the eyepiece and is hence accessible as a location where a graticule, or micrometer crosshairs, may be placed.
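Looking back at the Huygens condition above, the zero-transverse-chromatic-aberration spacing is simple to compute. The sketch below is only illustrative; the 1:2 ratio of focal lengths chosen here is an assumption, not a prescription from the text.

```python
def huygens_separation(f_eye_mm: float, f_field_mm: float) -> float:
    """Air gap giving zero transverse chromatic aberration for two thin
    lenses of the same glass: d = (f1 + f2) / 2."""
    return 0.5 * (f_eye_mm + f_field_mm)

# Example: a 10 mm eye lens with a 20 mm field lens.
print(huygens_separation(10.0, 20.0))   # 15.0 mm
```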
Because a separation of exactly one focal length would be required to correct transverse chromatic aberration, it is not possible to correct the Ramsden design completely for transverse chromatic aberration. The design is slightly better than the Huygens, but still not up to today's standards. It remains highly suitable for use with instruments operating with near-monochromatic light sources, e.g. polarimeters.
Kellner or "Achromat"
In a Kellner eyepiece an achromatic doublet is used in place of the simple plano-convex eye lens in the Ramsden design to correct the residual transverse chromatic aberration. Carl Kellner designed this first modern achromatic eyepiece in 1849; it is also called an "achromatized Ramsden". Kellner eyepieces are a 3-lens design. They are inexpensive, give fairly good images from low to medium power, and are far superior to the Huygens or Ramsden designs. The eye relief is better than in the Huygens but worse than in the Ramsden eyepiece. The biggest problem of Kellner eyepieces was internal reflections; today's anti-reflection coatings make them usable, economical choices for small to medium aperture telescopes with focal ratios of f/6 or longer. The typical field of view is 40 to 50 degrees.
Plössl or "Symmetrical"
The Plössl is an eyepiece usually consisting of two sets of doublets, designed by Georg Simon Plössl in 1860. Since the two doublets can be identical, this design is sometimes called a symmetrical eyepiece. The compound Plössl lens provides a large, 50+ degree apparent field of view, along with a relatively large true field of view. This makes the eyepiece well suited to a variety of observational purposes, including deep-sky and planetary viewing. The chief disadvantage of the Plössl optical design is short eye relief compared to an orthoscopic, since the Plössl eye relief is restricted to about 70–80% of the focal length. The short eye relief is most critical at short focal lengths below about 10 mm, when viewing can become uncomfortable, especially for people wearing glasses. The Plössl eyepiece was an obscure design until the 1980s, when astronomical equipment manufacturers started selling redesigned versions of it. Today it is a very popular design on the amateur astronomical market, where the name Plössl covers a range of eyepieces with at least four optical elements. The Plössl is one of the more expensive eyepieces to manufacture because of the quality of glass required and the need for well-matched convex and concave lenses to prevent internal reflections. For this reason, the quality of different Plössl eyepieces varies: there are notable differences between cheap Plössls with the simplest anti-reflection coatings and well-made ones.
Orthoscopic or "Abbe"
The 4-element orthoscopic eyepiece consists of a plano-convex singlet eye lens and a cemented convex-convex achromatic triplet field lens. This gives the eyepiece nearly perfect image quality and good eye relief, but a narrow apparent field of view of about 40°–45°. It was invented by Ernst Abbe in 1880. It is called "orthoscopic" or "orthographic" because of its low degree of distortion, and is also sometimes called an "ortho" or "Abbe". Until the advent of multicoatings and the popularity of the Plössl, orthoscopics were the most popular design for telescope eyepieces. Even today these eyepieces are considered good for planetary and lunar viewing. Due to their low degree of distortion and the corresponding globe effect, they are less suitable for applications that require excessive panning of the instrument.
A monocentric is an achromatic triplet lens with two pieces of crown glass cemented on both sides of a flint glass element. The elements are thick, strongly curved, and their surfaces have a common centre, giving it the name "monocentric". It was invented by Hugo Adolf Steinheil around 1883. This design, like the solid eyepiece designs of Robert Tolles, Charles S. Hastings, and E. Wilfred Taylor, is free from ghost reflections and gives a bright, contrasty image, a desirable feature when it was invented (before anti-reflective coatings). It has a narrow field of view of around 25° and is a favourite amongst planetary observers.
An Erfle is a 5-element eyepiece consisting of two achromatic lenses with extra lenses in between. Erfles were invented during the First World War for military purposes, described in Heinrich Erfle's US Patent 1,478,704 of August 1921, and are a logical extension to wider fields of four-element eyepieces such as Plössls. Erfle eyepieces are designed to have a wide field of view (about 60 degrees), but they are unusable at high powers because they suffer from astigmatism and ghost images. However, with lens coatings they are acceptable at low powers (focal lengths of 20 mm and up), and at 40 mm they can be excellent. Erfles are very popular because they have large eye lenses and good eye relief, and can be very comfortable to use.
The König eyepiece has a concave-convex positive doublet and a plano-convex singlet. The strongly convex surfaces of the doublet and singlet face and (nearly) touch each other. The doublet has its concave surface facing the light source and the singlet has its almost flat (slightly convex) surface facing the eye. It was designed in 1915 by German optician Albert König (1871–1946) as a simplified Abbe. The design allows for high magnification with remarkably high eye relief, the highest in proportion to focal length of any design before the Nagler in 1979. The field of view of about 55° makes its performance similar to the Plössl, with the advantage of requiring one less lens. Modern versions of the König can use improved glass or add more lenses, grouped into various combinations of doublets and singlets. The most typical adaptation is to add a positive, concave-convex simple lens before the doublet, with the concave face towards the light source and the convex surface facing the doublet. Modern improvements typically have fields of view of 60°–70°.
An RKE eyepiece has an achromatic field lens and a double-convex eye lens, a reversed adaptation of the Kellner eyepiece. It was designed by Dr. David Rank for the Edmund Scientific Corporation, who marketed it throughout the late 1960s and early 1970s. The design provides a slightly wider field of view than the classic Kellner and resembles a widely spaced version of the König. According to Edmund Scientific Corporation, RKE stands for "Rank Kellner Eyepiece". In an amendment to their trademark application on January 16, 1979, it was given as "Rank, Kaspereit, Erfle", the three designs from which the eyepiece was derived.
Invented by Albert Nagler and patented in 1979, the Nagler eyepiece is a design optimized for astronomical telescopes to give an ultra-wide field of view (82°) with good correction for astigmatism and other aberrations. Introduced in 2007, the Ethos is an enhanced ultra-wide-field design developed principally by Paul Dellechiaie under Albert Nagler's guidance at Tele Vue Optics, and claims a 100–110° AFOV.
This is achieved using exotic high-index glass and up to eight optical elements in four or five groups; there are five similar designs, called the Nagler, Nagler type 2, Nagler type 4, Nagler type 5, and Nagler type 6. The newer Delos design is a modified Ethos design with a FOV of 'only' 72 degrees, but with a long 20 mm eye relief. The number of elements in a Nagler makes the design seem complex, but the idea is fairly simple: every Nagler has a negative doublet field lens, which increases magnification, followed by several positive groups. The positive groups, considered separately from the first negative group, combine to have a long focal length and form a positive lens. That allows the design to take advantage of the many good qualities of low-power lenses. In effect, a Nagler is a superior version of a Barlow lens combined with a long focal length eyepiece (see the sketch below). This design has been widely copied in other wide-field or long eye relief eyepieces. The main disadvantage of Naglers is their weight: long focal length versions exceed 0.5 kg (1.1 lb), which is enough to unbalance small telescopes. Another disadvantage is the high purchase cost, with large Naglers' prices comparable to the cost of a small telescope. Hence these eyepieces are regarded by many amateur astronomers as a luxury.
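The Barlow comparison above can be made concrete. A minimal sketch, assuming a hypothetical 1200 mm focal-length telescope and a 26 mm eyepiece (illustrative values, not a Tele Vue prescription): a Barlow of power k multiplies the magnification by k, which is equivalent to dividing the eyepiece's effective focal length by k.

```python
# The text likens the Nagler layout to a Barlow lens feeding a long
# focal-length eyepiece. A Barlow multiplies magnification by its power,
# i.e. divides the eyepiece's effective focal length. Toy numbers only.

def with_barlow(eyepiece_focal_mm: float, barlow_power: float) -> float:
    """Effective eyepiece focal length behind a Barlow of given power."""
    return eyepiece_focal_mm / barlow_power

scope_focal = 1200.0                      # hypothetical telescope, mm
for eyepiece, barlow in [(26.0, 1.0), (26.0, 2.0)]:
    f_eff = with_barlow(eyepiece, barlow)
    print(f"{eyepiece} mm eyepiece + {barlow}x Barlow -> "
          f"{scope_focal / f_eff:.0f}x at an effective {f_eff:.0f} mm")
# 26.0 mm eyepiece + 1.0x Barlow -> 46x at an effective 26 mm
# 26.0 mm eyepiece + 2.0x Barlow -> 92x at an effective 13 mm
```

The long-focal-length rear elements keep the comfortable eye relief, while the negative front group supplies the extra power, which is the trade-off the paragraph above describes.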
See also:
- Barlow lens
- List of telescope parts and construction
- Optical microscope
- Optical telescope
- Pocket comparator
Hypothesis Testing in Research
Hypothesis testing in research is the act of testing an assumption about a population. Researchers test this assumed truth by examining data collected from members of that population and determining whether there is enough evidence in its favour: if the results show significant support for what was originally hypothesized, the hypothesis is accepted; otherwise, the assumption is set aside until more information comes along to support it. Hypothesis testing is a formal process for investigating ideas about the real world using statistics. It is mainly used by scientists to examine predictions, known as hypotheses, that originate from theories.
How to test a hypothesis in research
The 4 steps of testing a hypothesis in research are:
Step 1: Defining the null and alternative hypotheses
Before testing a hypothesis you need to state it, that is, develop the prediction you want to investigate. It is important to write the null hypothesis (H0) and the alternative hypothesis (H1) so that they can be tested using mathematical formulas. The alternative hypothesis is your initial hypothesis: it predicts the relationship between the variables. For example, suppose you intend to investigate whether there is a relationship between height and gender. On the basis of your knowledge of human physiology, you might hypothesize that men are taller than women. To test this, you restate it as:
H0: On average, men are not taller than women.
H1: On average, men are taller than women.
Step 2: Information gathering for hypothesis testing
For statistical hypothesis testing to be valid, you must carry out sampling and data collection in a manner designed to test the hypothesis. If the information is not representative, you cannot make statistical inferences about the population in which you are interested.
Example: If you intend to test the average difference between the heights of girls and boys, you should select a sample that consists of an equal proportion of girls and boys, cover all socio-economic classes, and consider other variables that might influence average height.
Step 3: Execution of statistical test
Several statistical tests are available for testing hypotheses, and they generally work by comparing the variation between groups with the variation within groups. If the difference between groups is large relative to the variation within them, the test statistic will reflect that, and the difference is unlikely to have arisen by chance. If, instead, there is high variance within each group and little variance between groups, the statistical test will return a high p-value, meaning that any difference you measure between groups is likely due to chance. When selecting a statistical test, you need to consider the type of information you intend to gather. For the data gathered here, you can perform a t-test to find out whether boys are on average taller than girls. A t-test gives you:
- An estimate of the difference between the heights of boys and girls.
- A p-value, which indicates how likely the observed difference would be if the null hypothesis were true.
- A decision on whether the null hypothesis is supported or refuted.
Based on the outcome of the statistical test, you then determine whether the null hypothesis you designed should be accepted or rejected. You can use the p-value generated by the statistical test to guide your decision. A common significance threshold for rejecting the null hypothesis is 0.05; in simple words, if the null hypothesis were true, there would be less than a 5 per cent chance of seeing results at least as extreme as these.
For example, in an investigation of the difference between the heights of boys and girls, the findings reveal an average difference in height, and the p-value generated through the statistical test is 0.002, which is below the maximum of 0.05. Since the p-value is less than 0.05, you can decide to reject the null hypothesis.
Step 4: Presenting the results and facilitating discussion
You need to present the outcomes of hypothesis testing in the results and discussion sections of the research paper. In the results chapter you should provide a detailed summary of the data and summarise the outcomes of the statistical test. In the discussion chapter you should debate whether the primary hypothesis should be accepted or rejected.
Example of stating an outcome: comparing the average heights of boys and girls, the investigation found that the average difference in height is 14.3 cm, with a p-value of 0.002. Therefore, you can reject the null hypothesis that boys are no taller than girls and conclude that there is a difference between the heights of boys and girls.
When presenting research outcomes in a research paper, you also need to review your alternative hypothesis and state whether the outcome of the test is consistent or inconsistent with it. Example: the difference between the mean heights of girls and boys is 14.3 cm, with a p-value of 0.002; consistent with the alternative hypothesis, there is variation between the heights of girls and boys, and this variation is statistically significant, meaning the difference is meaningful. (A worked code example follows below.)
In conclusion, hypothesis testing helps you test your assumptions, and you can use it to analyze the relationship between different variables.
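The steps above can be run end to end in a few lines. This sketch uses simulated heights as a stand-in for real survey data and assumes SciPy is installed (the alternative= argument requires SciPy 1.6 or newer):

```python
# A worked version of Steps 1-3: compare mean heights of two samples with
# an independent two-sample t-test. The data are simulated stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
boys = rng.normal(loc=175.0, scale=7.0, size=200)    # simulated heights, cm
girls = rng.normal(loc=162.0, scale=6.5, size=200)

# H0: mean height of boys <= mean height of girls
# H1: mean height of boys >  mean height of girls (one-sided test)
t_stat, p_value = stats.ttest_ind(boys, girls, alternative="greater")

alpha = 0.05
print(f"mean difference: {boys.mean() - girls.mean():.1f} cm")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
if p_value < alpha:
    print("Reject H0: the data support that boys are taller on average.")
else:
    print("Fail to reject H0: no significant difference detected.")
```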
One hundred meters (or about 328 feet) underground, beneath the border between France and Switzerland, there's a circular machine that might reveal to us the secrets of the universe. Or, according to some people, it could destroy all life on Earth instead. One way or another, it's the world's largest machine and it will examine the universe's tiniest particles. It's the Large Hadron Collider (LHC). The LHC is part of a project helmed by the European Organization for Nuclear Research, also known as CERN. The LHC joins CERN's accelerator complex outside of Geneva, Switzerland. Once it's switched on, the LHC will hurl beams of protons and ions at a velocity approaching the speed of light. The LHC will cause the beams to collide with each other, and then record the resulting events caused by the collision. Scientists hope that these events will tell us more about how the universe began and what it's made of. The LHC is the most ambitious and powerful particle accelerator built to date. Thousands of scientists from many countries are working together -- and competing with one another -- to make new discoveries. Six sites along the LHC's circumference gather data for different experiments. Some of these experiments overlap, and scientists will be trying to be the first to uncover important new information. The purpose of the Large Hadron Collider is to increase our knowledge about the universe. While the discoveries scientists will make could lead to practical applications down the road, that's not the reason hundreds of scientists and engineers built the LHC. It's a machine built to further our understanding. Considering the LHC costs billions of dollars and requires the cooperation of numerous countries, the absence of a practical application may be surprising. What do scientists hope to find by using the LHC? Keep reading to find out.
What Is the LHC Looking For?
In an attempt to understand our universe, including how it works and its actual structure, scientists proposed a theory called the standard model. This theory tries to define and explain the fundamental particles that make the universe what it is. It combines elements from Einstein's theory of relativity with quantum theory. It also deals with three of the four basic forces of the universe: the strong nuclear force, the weak nuclear force and the electromagnetic force. It does not address the effects of gravity, the fourth fundamental force. The standard model makes several predictions about the universe, many of which seem to be true according to various experiments. But there are other aspects of the model that remain unproven. One of those is a theoretical particle called the Higgs boson. The Higgs boson may answer questions about mass. Why does matter have mass? Scientists have identified particles that have no mass, such as neutrinos. Why should one kind of particle have mass and another lack it? Scientists have proposed many ideas to explain the existence of mass. The simplest of these is the Higgs mechanism. This theory says that there may be a particle and a corresponding mediating force that would explain why some particles have mass. The theoretical particle has never been observed and may not even exist. Some scientists hope the events created by the LHC will also uncover evidence for the existence of the Higgs boson. Others hope that the events will provide hints of new information we haven't even considered yet.
Another question scientists have about matter deals with early conditions in the universe. During the earliest moments of the universe, matter and energy were coupled. Just after matter and energy separated, particles of matter and antimatter annihilated each other. If there had been an equal amount of matter and antimatter, the two kinds of particles would have canceled each other out. But fortunately for us, there was a bit more matter than antimatter in the universe. Scientists hope that they'll be able to observe antimatter during LHC events. That might help us understand why there was a minuscule difference in the amount of matter versus antimatter when the universe began. Dark matter might also play an important role in LHC research. Our current understanding of the universe suggests that the matter we can observe only accounts for about 4 percent of all the matter that must exist. When we look at the movement of galaxies and other celestial bodies, we see that their motions suggest there's much more matter in the universe than we can detect. Scientists named this undetectable material dark matter. Together, observable matter and dark matter could account for about 25 percent of the universe. The other three-quarters would come from a force called dark energy, a hypothetical energy that contributes to the expansion of the universe. Scientists hope that their experiments will either provide further evidence for the existence of dark matter and dark energy or provide evidence that could support an alternate theory. That's just the tip of the particle physics iceberg, though. There are even more exotic and counterintuitive things the LHC might turn up. Like what? Find out in the next section.
LHC Research: The Strange Stuff
If theoretical particles, antimatter and dark energy aren't unusual enough, some scientists believe that the LHC could uncover evidence of other dimensions. We're used to living in a world of four dimensions -- three spatial dimensions and time. But some physicists theorize that there may be other dimensions we can't perceive. Some theories only make sense if there are several more dimensions in the universe. For example, one version of string theory requires the existence of no fewer than 11 dimensions. String theorists hope the LHC will provide evidence to support their proposed model of the universe. String theory states that the fundamental building block of the universe isn't a particle, but a string. Strings can either be open ended or closed. They also can vibrate, similar to the way the strings on a guitar vibrate when plucked. Different vibrations make the strings appear to be different things. A string vibrating one way would appear as an electron. A different string vibrating another way would be a neutrino. Some scientists have criticized string theory, saying that there's no evidence to support the theory itself. String theory incorporates gravity into the standard model -- something scientists can't do without an additional theory. It reconciles Einstein's theory of general relativity with quantum field theory. But there's still no proof these strings exist. They are far too small to observe and currently there's no way to test for them. That has led some scientists to dismiss string theory as more of a philosophy than a science. String theorists hope that the LHC will change critics' minds. They are looking for signs of supersymmetry. According to the standard model, every particle has an anti-particle.
For example, the anti-particle for an electron (a particle with a negative charge) is a positron. Supersymmetry proposes that particles also have superpartners, which in turn have their own counterparts. That means every particle has three counter-particles. Although we've not seen any indication of these superpartners in nature, theorists hope that the LHC will prove they actually exist. Potentially, superparticles could explain dark matter or help fit gravity into the overall standard model. How big is the LHC? How much power will it use? How much did it cost to build? Find out in the next section.
LHC by the Numbers
The Large Hadron Collider is a massive and powerful machine. It consists of eight sectors. Each sector is an arc bounded on each end by a section called an insertion. The LHC's circumference measures 27 kilometers (16.8 miles). The accelerator tubes and collision chambers are 100 meters (328 feet) underground. Scientists and engineers can access the service tunnel the machinery sits in by descending in elevators and stairways located at several points along the circumference of the LHC. CERN is building structures above ground where scientists can collect and analyze the data the LHC generates. The LHC uses magnets to steer beams of protons as they travel at 99.99 percent of the speed of light. The magnets are very large, many weighing several tons. There are about 9,600 magnets in the LHC. The magnets are cooled to a chilly 1.9 kelvins (−271.25 °C or −456.25 °F). That's colder than the vacuum of outer space. Speaking of vacuums, the proton beams inside the LHC travel through pipes in what CERN calls an "ultra-high vacuum." The reason for creating such a vacuum is to avoid introducing particles the protons could collide with before they reach the proper collision points. Even a single molecule of gas could cause an experiment to fail. There are six areas along the circumference of the LHC where engineers will be able to perform experiments. Think of each area as if it were a microscope with a digital camera. Some of these microscopes are huge -- the ATLAS experiment is a device that is 46 meters (150.9 feet) long, 25 meters (82 feet) tall and weighs about 7,000 metric tons [source: ATLAS]. The LHC and the experiments connected to it contain about 150 million sensors. Those sensors will collect data and send it to various computing systems. According to CERN, the amount of data collected during experiments will be about 700 megabytes per second (MB/s). On a yearly basis, this means the LHC will gather about 15 petabytes of data. A petabyte is a million gigabytes. That much data could fill 100,000 DVDs [source: CERN]. It takes a lot of energy to run the LHC. CERN estimates that the annual power consumption for the collider will be about 800,000 megawatt hours (MWh). It could have been much higher, but the facility will not operate during the winter months. According to CERN, the price for all this energy will be a cool 19 million euros. That's almost $30 million per year in electricity bills for a facility that cost more than $6 billion to build [source: CERN]! What exactly happens during an experiment? Keep reading to find out.
LHC: Smashing Protons
The principle behind the LHC is pretty simple. First, you fire two beams of particles along two pathways, one going clockwise and the other going counterclockwise. You accelerate both beams to near the speed of light. Then, you direct both beams toward each other and watch what happens.
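A few of the figures quoted above can be sanity-checked with simple arithmetic. The sketch below assumes the ring's published circumference of roughly 26,659 m (slightly under the rounded 27 km figure) and an assumed ~240 days of running time per year; both are inputs, not outputs:

```python
# Back-of-the-envelope checks on the figures quoted above.
C_LIGHT = 299_792_458          # speed of light, m/s
CIRCUMFERENCE = 26_659         # m, published LHC ring circumference

# A near-light-speed proton laps the ring about 11,245 times per second.
laps_per_second = C_LIGHT / CIRCUMFERENCE
print(f"revolutions per second: {laps_per_second:,.0f}")

# 700 MB/s of recorded data over an assumed ~2.1e7 s (~240 days) of
# running time reproduces the ~15 PB/year figure.
rate_bytes_per_s = 700e6
running_seconds = 2.1e7
petabytes = rate_bytes_per_s * running_seconds / 1e15
print(f"annual data volume: ~{petabytes:.0f} PB")
```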
The equipment necessary to achieve that goal is far more complex. The LHC is just one part of the overall CERN particle accelerator facility. Before any protons or ions enter the LHC, they've already gone through a series of steps. Let's take a look at the life of a proton as it goes through the LHC process. First, scientists must strip electrons from hydrogen atoms to produce protons. Then, the protons enter the LINAC2, a machine that fires beams of protons into an accelerator called the PS Booster. These machines use devices called radio frequency cavities to accelerate the protons. The cavities contain a radio-frequency electric field that pushes the proton beams to higher speeds. Giant magnets produce the magnetic fields necessary to keep the proton beams on track. In car terms, think of the radio frequency cavities as an accelerator and the magnets as a steering wheel. Once a beam of protons reaches the right energy level, the PS Booster injects it into another accelerator called the Super Proton Synchrotron (SPS). The beams continue to pick up speed. By now, the beams have divided into bunches. Each bunch contains 1.1 × 10¹¹ protons, and there are 2,808 bunches per beam [source: CERN]. The SPS injects beams into the LHC, with one beam traveling clockwise and the other going counterclockwise. Inside the LHC, the beams continue to accelerate. This takes about 20 minutes. At top speed, the beams make 11,245 trips around the LHC every second. The two beams converge at one of the six detector sites positioned along the LHC. At that position, there will be 600 million collisions per second [source: CERN]. When two protons collide, they break apart into even smaller particles, including subatomic particles called quarks and the gluons that mediate the force binding them. Quarks are very unstable and will decay in a fraction of a second. The detectors collect information by tracking the path of subatomic particles. Then the detectors send data to a grid of computer systems. Not every proton will collide with another proton. Even with a machine as advanced as the LHC, it's impossible to direct beams of particles as small as protons so that every particle will collide with another one. Protons that fail to collide will continue in the beam to a beam dumping section. There, a section made of graphite will absorb the beam. The beam dumping sections are able to absorb beams if something goes wrong inside the LHC. To learn more about the mechanics behind particle accelerators, take a look at How Atom Smashers Work. The LHC has six detectors positioned along its circumference. What do these detectors do and how do they work? Find out in the next section.
The LHC Detectors
The six areas along the circumference of the LHC that will gather data and conduct experiments are simply known as detectors. Some of them will search for the same kind of information, though not in the same way. There are four major detector sites and two smaller ones. The detector known as A Toroidal LHC ApparatuS (ATLAS) is the largest of the bunch. It measures 46 meters (150.9 feet) long by 25 meters (82 feet) tall and 25 meters wide. At its core is a device called the inner tracker. The inner tracker detects and analyzes the momentum of particles passing through the ATLAS detector. Surrounding the inner tracker is a calorimeter. Calorimeters measure the energy of particles by absorbing them. Scientists can look at the path the particles took and extrapolate information about them. The ATLAS detector also has a muon spectrometer.
Muons are negatively charged particles about 200 times heavier than electrons. Muons can travel through a calorimeter without stopping -- they are the only kind of particle that can do that. The spectrometer measures the momentum of each muon with charged particle sensors. These sensors can detect fluctuations in the ATLAS detector's magnetic field. The Compact Muon Solenoid (CMS) is another large detector. Like the ATLAS detector, the CMS is a general-purpose detector that will detect and measure the subparticles released during collisions. The detector sits inside a giant solenoid magnet that can create a magnetic field nearly 100,000 times stronger than the Earth's magnetic field [source: CMS]. Then there's ALICE, which stands for A Large Ion Collider Experiment. Engineers designed ALICE to study collisions between lead ions. By colliding lead ions at high energy, scientists hope to recreate conditions similar to those just after the big bang. They expect to see the ions break apart into a mixture of quarks and gluons. A main component of ALICE is the Time Projection Chamber (TPC), which will examine and reconstruct particle trajectories. Like the ATLAS and CMS detectors, ALICE also has a muon spectrometer. Next is the Large Hadron Collider beauty (LHCb) detector site. The purpose of the LHCb is to search for evidence of antimatter. It does this by searching for a particle called the beauty quark. A series of sub-detectors surrounding the collision point stretches 20 meters (65.6 feet) in length. The detectors can move in tiny, precise ways to catch beauty quark particles, which are very unstable and rapidly decay. The TOTal Elastic and diffractive cross section Measurement (TOTEM) experiment is one of the two smaller detectors in the LHC. It will measure the size of protons and the LHC's luminosity. In particle physics, luminosity measures the rate at which an accelerator produces collisions. Finally, there's the Large Hadron Collider forward (LHCf) detector site. This experiment simulates cosmic rays within a controlled environment. The goal of the experiment is to help scientists come up with ways to devise wide-area experiments to study naturally occurring cosmic ray collisions. Each detector site has a team of researchers ranging from a few dozen to more than a thousand scientists. In some cases, these scientists will be searching for the same information. For them, it's a race to make the next revolutionary discovery in physics. How will scientists handle all the data these detectors will gather? More on that in the next section.
Computing the LHC Data
With 15 petabytes of data (that's 15,000,000 gigabytes) gathered by the LHC detectors every year, scientists have an enormous task ahead of them. How do you process that much information? How do you know you're looking at something significant within such a large data set? Even using a supercomputer, processing that much information could take thousands of hours. Meanwhile, the LHC would continue accumulating even more data. CERN's solution to this problem is the LHC Computing Grid. The grid is a network of computers, each of which can analyze a chunk of data on its own. Once a computer completes its analysis, it can send the findings on to a centralized computer and accept a new chunk of data. As long as scientists can divide the data up into chunks, the system works well. Within the computer industry this approach is called grid computing.
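As a toy illustration of the divide-analyse-merge pattern described above, the following runs a fake "analysis" over chunks of data with a local process pool. This is a single-machine analogy for grid computing, not CERN's actual middleware:

```python
# Split a dataset into independent chunks, analyse each in parallel,
# then merge the partial results -- the essence of the grid approach.
from multiprocessing import Pool
import random

def analyse(chunk):
    """Stand-in 'analysis': count events above an arbitrary threshold."""
    return sum(1 for energy in chunk if energy > 0.9)

if __name__ == "__main__":
    random.seed(0)
    events = [random.random() for _ in range(1_000_000)]  # fake event data
    chunk_size = 100_000
    chunks = [events[i:i + chunk_size]
              for i in range(0, len(events), chunk_size)]

    with Pool() as pool:                  # one worker per CPU core
        partial_counts = pool.map(analyse, chunks)

    print(f"events above threshold: {sum(partial_counts)}")
```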
The scientists at CERN decided to focus on using relatively inexpensive equipment to perform their calculations. Instead of purchasing cutting-edge data servers and processors, CERN concentrates on off-the-shelf hardware that can work well in a network. Their approach is very similar to the strategy Google employs: it's more cost efficient to purchase lots of average hardware than a few advanced pieces of equipment. Using a special kind of software called middleware, the network of computers will be able to store and analyze data for every experiment conducted at the LHC. The structure for the system is organized into tiers:
- Tier 0 is CERN's computing system, which will first process information and divide it into chunks for the other tiers.
- Twelve Tier 1 sites located in several countries will accept data from CERN over dedicated computer connections. These connections will be able to transmit data at 10 gigabits per second. The Tier 1 sites will further process data and divide it up to send further down the grid.
- More than 100 Tier 2 sites will connect with the Tier 1 sites. Most of these sites are universities or scientific institutions. Each site will have multiple computers available to process and analyze data. As each processing job completes, the sites will push data back up the tier system.
The connection between Tier 1 and Tier 2 is a standard network connection. Any Tier 2 site can access any Tier 1 site. The reason for that is to allow research institutions and universities the chance to focus on specific information and research. One challenge with such a large network is data security. CERN determined that the network couldn't rely on firewalls because of the amount of data traffic on the system. Instead, the system relies on identification and authorization procedures to prevent unauthorized access to LHC data. Some people say that worrying about data security is a moot point. That's because they think the LHC will end up destroying the entire world. Is it really possible? Find out in the next section.
Will the LHC Destroy the World?
The LHC will allow scientists to observe particle collisions at an energy level far higher than any previous experiment. Some people worry that such powerful reactions could cause serious trouble for the Earth. In fact, a few people are so concerned that they filed a lawsuit against CERN in an attempt to delay the LHC's activation. In March 2008, former nuclear safety officer Walter Wagner and Luis Sancho spearheaded a lawsuit filed in Hawaii's U.S. District Court. They claim the LHC could potentially destroy the world [source: MSNBC]. What is the basis for their concerns? Could the LHC create something that could end all life as we know it? What exactly might happen? One fear is that the LHC could produce black holes. Black holes are regions in which matter collapses into a point of infinite density. CERN scientists admit that the LHC could produce black holes, but they also say those black holes would be on a subatomic scale and would collapse almost instantly. In contrast, the black holes astronomers study result from an entire star collapsing in on itself. There's a big difference between the mass of a star and that of a proton. Another concern is that the LHC will produce an exotic (and so far hypothetical) material called strangelets. One possible trait of strangelets is particularly worrisome.
Cosmologists theorize that strangelets could possess a powerful gravitational field that might allow them to convert the entire planet into a lifeless hulk. Scientists at CERN dismiss this concern using multiple counterpoints. First, they point out that strangelets are hypothetical; no one has observed such material in the universe. Second, they say that the electromagnetic field around such material would repel normal matter rather than change it into something else. Third, they say that even if such matter exists, it would be highly unstable and would decay almost instantaneously. Fourth, the scientists say that high-energy cosmic rays should produce such material naturally. Since the Earth is still around, they theorize that strangelets are a non-issue. Another theoretical particle the LHC might generate is a magnetic monopole. Theorized by P.A.M. Dirac, a monopole is a particle that holds a single magnetic charge (north or south) instead of two. The concern Wagner and Sancho cited is that such particles could pull matter apart with their lopsided magnetic charges. CERN scientists disagree, saying that if monopoles exist, there's no reason to fear that such particles would cause such destruction. In fact, at least one team of researchers is actively looking for evidence of monopoles in the hope that the LHC will produce some. Other concerns about the LHC include fears of radiation and the fact that it will produce the highest energy collisions of particles on Earth. CERN states that the LHC is extremely safe, with thick shielding that includes 100 meters (328 feet) of earth on top of it. In addition, personnel are not allowed underground during experiments. As for the concern about collisions, scientists point out that high-energy cosmic ray collisions happen all the time in nature. Rays collide with the sun, moon and other planets, all of which are still around with no sign of harm. With the LHC, those collisions will happen within a controlled environment. Otherwise, there's really no difference. Will the LHC succeed in furthering our knowledge about the universe? Will the data collected raise more questions than it answers? If past experiments are any indication, it's probably a safe bet to assume the answer to both of these questions is yes. To learn more about the Large Hadron Collider, particle accelerators and related topics, see the sources below.
Sources:
- "ALICE: A Large Ion Collider Experiment." CERN. http://aliceinfo.cern.ch/Public/index.html
- Bos, Eric-Jan, Martelli, Edoardo and Moroni, Paolo. "LHC high-level network architecture." GÉANT2. June 17, 2005. http://www.geant2.net/upload/pdf/LHC_networking_v1-9_NC.pdf
- Boyle, Alan. "Doomsday fears spark lawsuit over collider." MSNBC. March 28, 2008. http://www.msnbc.msn.com/id/23844529/
- CERN. http://public.web.cern.ch/Public/Welcome.html
- "CERN LHC." GÉANT2. http://www.geant2.net/server/show/nav.00d00h001003
- "CERNPodcast." CERN. http://www.cernpodcast.com/
- Collins, Graham P. "Large Hadron Collider: The Discovery Machine." Scientific American. Jan. 2008. http://www.sciam.com/article.cfm?id=the-discovery-machine-hadron-collider
- "Design flaw blamed for magnet failure at Cern." Professional Engineering. April 25, 2007.
- Holden, Joshua. "The Story of Strangelets." Rutgers University. May 17, 1998. http://www.physics.rutgers.edu/~jholden/strange/strange.html
- "Large Hadron Collider Beauty Experiment." CERN. http://lhcb-public.web.cern.ch/lhcb-public/Welcome.html
- "LHC: The Guide." CERN. http://cdsweb.cern.ch/record/1092437/files/CERN-Brochure-2008-001-Eng.pdf
- "M-theory, the theory formerly known as Strings." Cambridge University. http://www.damtp.cam.ac.uk/user/gr/public/qg_ss.html
- Overbye, Dennis. "Will collider break ground -- or destroy the Earth?" The Seattle Times. March 29, 2008. http://seattletimes.nwsource.com/html/nationworld/2004314373_super29.html
- "The Standard Model." Virtual Visitor Center, Stanford University. http://www2.slac.stanford.edu/vvc/theory/model.html
- "TOTEM Experiment." CERN. http://totem.web.cern.ch/Totem/
- Wagner, Richard J. "The Strange Matter of Planetary Destruction." March 21, 2007. http://chess.captain.at/strangelets-matter.html
Pressure measurement is the measurement of an applied force by a fluid (liquid or gas) on a surface. Pressure is typically measured in units of force per unit of surface area. Many techniques have been developed for the measurement of pressure and vacuum. Instruments used to measure and display pressure mechanically are called pressure gauges, vacuum gauges or compound gauges (vacuum & pressure). The widely used Bourdon gauge is a mechanical device, which both measures and indicates and is probably the best known type of gauge. A vacuum gauge is used to measure pressures lower than the ambient atmospheric pressure, which is set as the zero point, in negative values (for instance, −1 bar or −760 mmHg equals total vacuum). Most gauges measure pressure relative to atmospheric pressure as the zero point, so this form of reading is simply referred to as "gauge pressure". However, anything greater than total vacuum is technically a form of pressure. For very low pressures, a gauge that uses total vacuum as the zero point reference must be used, giving pressure readings as an absolute pressure. Other methods of pressure measurement involve sensors that can transmit the pressure reading to a remote indicator or control system (telemetry).
Absolute, gauge and differential pressures — zero reference
Everyday pressure measurements, such as for vehicle tire pressure, are usually made relative to ambient air pressure. In other cases measurements are made relative to a vacuum or to some other specific reference. When distinguishing between these zero references, the following terms are used:
- Absolute pressure is zero-referenced against a perfect vacuum, using an absolute scale, so it is equal to gauge pressure plus atmospheric pressure.
- Gauge pressure is zero-referenced against ambient air pressure, so it is equal to absolute pressure minus atmospheric pressure.
- Differential pressure is the difference in pressure between two points.
The zero reference in use is usually implied by context, and these words are added only when clarification is needed. Tire pressure and blood pressure are gauge pressures by convention, while atmospheric pressures, deep vacuum pressures, and altimeter pressures must be absolute. For most working fluids where a fluid exists in a closed system, gauge pressure measurement prevails. Pressure instruments connected to the system will indicate pressures relative to the current atmospheric pressure. The situation changes when extreme vacuum pressures are measured; then absolute pressures are typically used instead, and the measuring instruments used will be different. Differential pressures are commonly used in industrial process systems. Differential pressure gauges have two inlet ports, each connected to one of the volumes whose pressure is to be monitored. In effect, such a gauge performs the mathematical operation of subtraction through mechanical means, obviating the need for an operator or control system to watch two separate gauges and determine the difference in readings. Moderate vacuum pressure readings can be ambiguous without the proper context, as they may represent absolute pressure or gauge pressure without a negative sign. Thus a vacuum of 26 inHg gauge is equivalent to an absolute pressure of 4 inHg, calculated as 30 inHg (typical atmospheric pressure) − 26 inHg (gauge pressure). Atmospheric pressure is typically about 100 kPa at sea level, but is variable with altitude and weather.
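The zero-reference arithmetic above is simple enough to capture in code. A minimal sketch; the unit choices and the 101.325 kPa default are assumptions for illustration:

```python
# Zero-reference bookkeeping, following the definitions above. Atmospheric
# pressure is a parameter because it varies with altitude and weather.

def absolute_from_gauge(gauge_kpa: float, atmospheric_kpa: float = 101.325) -> float:
    """Absolute pressure = gauge pressure + atmospheric pressure."""
    return gauge_kpa + atmospheric_kpa

def absolute_from_vacuum(vacuum_kpa: float, atmospheric_kpa: float = 101.325) -> float:
    """A vacuum reading is gauge pressure below atmosphere."""
    return atmospheric_kpa - vacuum_kpa

# The inHg example from the text: a 26 inHg vacuum under a 30 inHg atmosphere.
INHG_TO_KPA = 3.386389
p_abs_kpa = absolute_from_vacuum(26 * INHG_TO_KPA, 30 * INHG_TO_KPA)
print(f"{p_abs_kpa / INHG_TO_KPA:.0f} inHg absolute")   # prints: 4 inHg absolute
```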
If the absolute pressure of a fluid stays constant, the gauge pressure of the same fluid will vary as atmospheric pressure changes. For example, when a car drives up a mountain, the (gauge) tire pressure goes up because atmospheric pressure goes down. The absolute pressure in the tire is essentially unchanged. Using atmospheric pressure as reference is usually signified by a "g" for gauge after the pressure unit, e.g. 70 psig, which means that the pressure measured is the total pressure minus atmospheric pressure. There are two types of gauge reference pressure: vented gauge (vg) and sealed gauge (sg). A vented-gauge pressure transmitter, for example, allows the outside air pressure to be exposed to the negative side of the pressure-sensing diaphragm, through a vented cable or a hole on the side of the device, so that it always measures the pressure referred to ambient barometric pressure. Thus a vented-gauge reference pressure sensor should always read zero pressure when the process pressure connection is held open to the air. A sealed gauge reference is very similar, except that atmospheric pressure is sealed on the negative side of the diaphragm. This is usually adopted on high pressure ranges, such as hydraulics, where atmospheric pressure changes will have a negligible effect on the accuracy of the reading, so venting is not necessary. This also allows some manufacturers to provide secondary pressure containment as an extra precaution for pressure equipment safety if the burst pressure of the primary pressure sensing diaphragm is exceeded. There is another way of creating a sealed gauge reference, and this is to seal a high vacuum on the reverse side of the sensing diaphragm. Then the output signal is offset, so the pressure sensor reads close to zero when measuring atmospheric pressure. A sealed gauge reference pressure transducer will never read exactly zero because atmospheric pressure is always changing and the reference in this case is fixed at 1 bar. To produce an absolute pressure sensor, the manufacturer seals a high vacuum behind the sensing diaphragm. If the process-pressure connection of an absolute-pressure transmitter is open to the air, it will read the actual barometric pressure. For much of human history, the pressure of gases like air was ignored, denied, or taken for granted, but as early as the 6th century BC, Greek philosopher Anaximenes of Miletus claimed that all things are made of air that is simply changed by varying levels of pressure. He could observe water evaporating, changing to a gas, and felt that this applied even to solid matter. More condensed air made colder, heavier objects, and expanded air made lighter, hotter objects. This was akin to how gases really do become less dense when warmer, more dense when cooler. In the 17th century, Evangelista Torricelli conducted experiments with mercury that allowed him to measure the presence of air. He would dip a glass tube, closed at one end, into a bowl of mercury and raise the closed end up out of it, keeping the open end submerged. The weight of the mercury would pull it down, leaving a partial vacuum at the far end. This validated his belief that air/gas has mass, creating pressure on things around it. Previously, the more popular conclusion, even for Galileo, was that air was weightless and it is vacuum that provided force, as in a siphon. 
The discovery helped bring Torricelli to the conclusion: "We live submerged at the bottom of an ocean of the element air, which by unquestioned experiments is known to have weight." This test, known as Torricelli's experiment, was essentially the first documented pressure gauge. Blaise Pascal went farther, having his brother-in-law try the experiment at different altitudes on a mountain, and finding indeed that the farther down in the ocean of atmosphere, the higher the pressure.

| | Pascal | Bar | Technical atmosphere | Standard atmosphere | Torr | Pound per square inch |
|---|---|---|---|---|---|---|
| 1 Pa | — | 10⁻⁵ bar | 1.0197×10⁻⁵ at | 9.8692×10⁻⁶ atm | 7.5006×10⁻³ Torr | 0.000145037737730 lbf/in² |
| 1 bar | 10⁵ Pa | — | 1.0197 at | 0.98692 atm | 750.06 Torr | 14.503773773022 lbf/in² |
| 1 atm | 101325 Pa | 1.01325 bar | 1.0332 at | — | 760 Torr | 14.6959487755142 lbf/in² |
| 1 Torr | 133.322368421 Pa | 0.001333224 bar | 0.00135951 at | 1/760 ≈ 0.001315789 atm | — | 0.019336775 lbf/in² |

The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N·m⁻² or kg·m⁻¹·s⁻²). This special name for the unit was added in 1971; before that, pressure in SI was expressed in units such as N·m⁻². When indicated, the zero reference is stated in parentheses following the unit, for example 101 kPa (abs). The pound per square inch (psi) is still in widespread use in the US and Canada, for measuring, for instance, tire pressure. A letter is often appended to the psi unit to indicate the measurement's zero reference (psia for absolute, psig for gauge, psid for differential), although this practice is discouraged by NIST. Because pressure was once commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., inches of water). Manometric measurement is the subject of pressure head calculations. The most common choices for a manometer's fluid are mercury (Hg) and water; water is nontoxic and readily available, while mercury's density allows for a shorter column (and so a smaller manometer) to measure a given pressure. The abbreviation "W.C." or the words "water column" are often printed on gauges and measurements that use water for the manometer. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. So measurements in "millimetres of mercury" or "inches of mercury" can be converted to SI units as long as attention is paid to the local factors of fluid density and gravity. Temperature fluctuations change the value of fluid density, while location can affect gravity. Although no longer preferred, these manometric units are still encountered in many fields. Blood pressure is measured in millimetres of mercury (see torr) in most of the world, and central venous pressure and lung pressures are still commonly measured in centimetres of water, as in settings for CPAP machines. Natural gas pipeline pressures are measured in inches of water, expressed as "inches W.C." Underwater divers use manometric units: the ambient pressure is measured in units of metres sea water (msw), which is defined as equal to one tenth of a bar. The unit used in the US is the foot sea water (fsw), based on standard gravity and a sea-water density of 64 lb/ft³. According to the US Navy Diving Manual, one fsw equals 0.30643 msw, 0.030643 bar, or 0.44444 psi, though elsewhere it states that 33 fsw is 14.7 psi (one atmosphere), which gives one fsw equal to about 0.445 psi.
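The conversion table lends itself to a small helper that routes every conversion through pascals. The factors below are the standard definitions; the function and dictionary names are ad hoc:

```python
# Conversion helpers mirroring the table above: everything goes through
# pascals. The round-off of the printed results matches the table.
TO_PASCAL = {
    "Pa":   1.0,
    "bar":  1e5,
    "at":   98_066.5,        # technical atmosphere (1 kgf/cm2)
    "atm":  101_325.0,       # standard atmosphere
    "Torr": 101_325.0 / 760,
    "psi":  6_894.757293,
}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Convert a pressure between any two units in TO_PASCAL."""
    return value * TO_PASCAL[from_unit] / TO_PASCAL[to_unit]

print(f"1 atm  = {convert(1, 'atm', 'Torr'):.0f} Torr")   # 760
print(f"1 bar  = {convert(1, 'bar', 'psi'):.4f} psi")     # 14.5038
print(f"1 Torr = {convert(1, 'Torr', 'Pa'):.3f} Pa")      # 133.322
```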
The msw and fsw are the conventional units for measurement of diver pressure exposure used in decompression tables, and the unit of calibration for pneumofathometers and hyperbaric chamber pressure gauges. Both msw and fsw are measured relative to normal atmospheric pressure. In vacuum systems, the units torr (millimetre of mercury), micron (micrometre of mercury), and inch of mercury (inHg) are most commonly used. Torr and micron usually indicate an absolute pressure, while inHg usually indicates a gauge pressure. Atmospheric pressures are usually stated using hectopascal (hPa), kilopascal (kPa), millibar (mbar) or atmospheres (atm). In American and Canadian engineering, stress is often measured in kip. Note that stress is not a true pressure since it is not scalar. In the cgs system the unit of pressure was the barye (ba), equal to 1 dyn·cm⁻². In the mts system, the unit of pressure was the pieze, equal to 1 sthene per square metre. Many other hybrid units are used, such as mmHg/cm² or grams-force/cm² (sometimes as kg/cm² without properly identifying the force units). Using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as a unit of force is prohibited in SI; the unit of force in SI is the newton (N).
Static and dynamic pressure
Static pressure is uniform in all directions, so pressure measurements are independent of direction in an immovable (static) fluid. Flow, however, applies additional pressure on surfaces perpendicular to the flow direction, while having little impact on surfaces parallel to the flow direction. This directional component of pressure in a moving (dynamic) fluid is called dynamic pressure. An instrument facing the flow direction measures the sum of the static and dynamic pressures; this measurement is called the total pressure or stagnation pressure. Since dynamic pressure is referenced to static pressure, it is neither gauge nor absolute; it is a differential pressure. While static gauge pressure is of primary importance to determining net loads on pipe walls, dynamic pressure is used to measure flow rates and airspeed. Dynamic pressure can be measured by taking the differential pressure between instruments parallel and perpendicular to the flow. Pitot-static tubes, for example, perform this measurement on airplanes to determine airspeed. The presence of the measuring instrument inevitably acts to divert flow and create turbulence, so its shape is critical to accuracy and the calibration curves are often non-linear.
Many instruments have been invented to measure pressure, with different advantages and disadvantages. Pressure range, sensitivity, dynamic response and cost all vary by several orders of magnitude from one instrument design to the next. The oldest type is the liquid column (a vertical tube filled with mercury) manometer, invented by Evangelista Torricelli in 1643. The U-tube was invented by Christiaan Huygens in 1661. Hydrostatic gauges (such as the mercury column manometer) compare pressure to the hydrostatic force per unit area at the base of a column of fluid. Hydrostatic gauge measurements are independent of the type of gas being measured, and can be designed to have a very linear calibration. They have poor dynamic response. Piston-type gauges counterbalance the pressure of a fluid with a spring (for example tire-pressure gauges of comparatively low accuracy) or a solid weight, in which case the instrument is known as a deadweight tester and may be used for calibration of other gauges.
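Returning to the dynamic-pressure discussion above: for incompressible flow, the pitot-static differential obeys q = ρv²/2 (the standard Bernoulli relation, not stated explicitly in the text), so airspeed can be recovered from a differential-pressure reading. A sketch with an assumed sea-level air density:

```python
# Airspeed from a pitot-static (differential) pressure reading, valid only
# well below the speed of sound; rho defaults to sea-level air.
import math

def airspeed(q_pa: float, rho: float = 1.225) -> float:
    """v = sqrt(2 q / rho); q is the dynamic (total - static) pressure in Pa."""
    return math.sqrt(2.0 * q_pa / rho)

# Example: a 3,000 Pa differential corresponds to about 70 m/s.
print(f"{airspeed(3000):.1f} m/s")
```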
Liquid column (manometer)
Liquid-column gauges consist of a column of liquid in a tube whose ends are exposed to different pressures. The column will rise or fall until its weight (a force applied due to gravity) is in equilibrium with the pressure differential between the two ends of the tube (a force applied due to fluid pressure). A very simple version is a U-shaped tube half-full of liquid, one side of which is connected to the region of interest while the reference pressure (which might be the atmospheric pressure or a vacuum) is applied to the other. The difference in liquid levels represents the applied pressure. The pressure exerted by a column of fluid of height h and density ρ is given by the hydrostatic pressure equation, P = hgρ. Therefore, the pressure difference between the applied pressure Pa and the reference pressure P0 in a U-tube manometer can be found by solving Pa − P0 = hgρ. In other words, the pressure on either end of the liquid column must be balanced (since the liquid is static), and so Pa = P0 + hgρ. In most liquid-column measurements, the result of the measurement is the height h, expressed typically in mm, cm, or inches. This height h is also known as the pressure head. When expressed as a pressure head, pressure is specified in units of length and the measurement fluid must be specified. When accuracy is critical, the temperature of the measurement fluid must likewise be specified, because liquid density is a function of temperature. So, for example, pressure head might be written "742.2 mmHg" or "4.2 inH2O at 59 °F" for measurements taken with mercury or water as the manometric fluid respectively. The word "gauge" or "vacuum" may be added to such a measurement to distinguish between a pressure above or below the atmospheric pressure. Both mm of mercury and inches of water are common pressure heads, which can be converted to SI units of pressure using unit conversion and the above formulas. If the fluid being measured is significantly dense, hydrostatic corrections may have to be made for the height between the moving surface of the manometer working fluid and the location where the pressure measurement is desired, except when measuring differential pressure of a fluid (for example, across an orifice plate or venturi), in which case the density ρ should be corrected by subtracting the density of the fluid being measured. Although any fluid can be used, mercury is preferred for its high density (13.534 g/cm³) and low vapour pressure. Its convex meniscus is advantageous since this means there will be no pressure errors from wetting the glass, though under exceptionally clean circumstances the mercury will stick to glass and the barometer may become stuck (the mercury can sustain a negative absolute pressure) even under a strong vacuum. For low pressure differences, light oil or water are commonly used (the latter giving rise to units of measurement such as inches water gauge and millimetres H2O). Liquid-column pressure gauges have a highly linear calibration. They have poor dynamic response because the fluid in the column may react slowly to a pressure change. When measuring vacuum, the working liquid may evaporate and contaminate the vacuum if its vapour pressure is too high. When measuring liquid pressure, a loop filled with gas or a light fluid can isolate the liquids to prevent them from mixing, but this can be unnecessary, for example, when mercury is used as the manometer fluid to measure differential pressure of a fluid such as water.
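The hydrostatic relation P = hgρ above translates directly into code. Nominal densities are assumed here; as the text notes, precise work corrects for temperature (and local gravity):

```python
# Hydrostatic head, P = rho * g * h, as used in the manometer discussion.
G = 9.80665                                        # standard gravity, m/s^2
DENSITY = {"mercury": 13_534.0, "water": 999.0}    # kg/m^3, near room temp

def head_to_pressure(height_m: float, fluid: str) -> float:
    """Pressure (Pa) supported by a column of the given fluid."""
    return DENSITY[fluid] * G * height_m

print(f"760 mmHg = {head_to_pressure(0.760, 'mercury'):,.0f} Pa")  # ~100.9 kPa
print(f"10 inH2O = {head_to_pressure(0.254, 'water'):,.0f} Pa")    # ~2.5 kPa
```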
Simple hydrostatic gauges can measure pressures ranging from a few torr (a few hundred pascals) to a few atmospheres (approximately 1,000,000 Pa). A single-limb liquid-column manometer has a larger reservoir instead of one side of the U-tube and has a scale beside the narrower column. The column may be inclined to further amplify the liquid movement. Based on use and structure, the following types of manometers are used: - Simple manometer - Differential manometer - Inverted differential manometer A McLeod gauge isolates a sample of gas and compresses it in a modified mercury manometer until the pressure is a few millimetres of mercury. The technique is very slow and unsuited to continual monitoring, but is capable of good accuracy. Unlike other manometer gauges, the McLeod gauge reading is dependent on the composition of the gas, since the interpretation relies on the sample compressing as an ideal gas. Due to the compression process, the McLeod gauge completely ignores partial pressures from non-ideal vapors that condense, such as pump oils, mercury, and even water if compressed enough. - Useful range: from around 10−4 Torr (roughly 10−2 Pa) down to vacuums as low as 10−6 Torr (0.1 mPa). About 0.1 mPa is the lowest direct measurement of pressure that is possible with current technology. Other vacuum gauges can measure lower pressures, but only indirectly, by measurement of other pressure-dependent properties. These indirect measurements must be calibrated to SI units by a direct measurement, most commonly a McLeod gauge. Aneroid gauges are based on a metallic pressure-sensing element that flexes elastically under the effect of a pressure difference across the element. "Aneroid" means "without fluid", and the term originally distinguished these gauges from the hydrostatic gauges described above. However, aneroid gauges can be used to measure the pressure of a liquid as well as a gas, and they are not the only type of gauge that can operate without fluid. For this reason, they are often called mechanical gauges in modern language. Aneroid gauges are not dependent on the type of gas being measured, unlike thermal and ionization gauges, and are less likely to contaminate the system than hydrostatic gauges. The pressure sensing element may be a Bourdon tube, a diaphragm, a capsule, or a set of bellows, which will change shape in response to the pressure of the region in question. The deflection of the pressure sensing element may be read by a linkage connected to a needle, or it may be read by a secondary transducer. The most common secondary transducers in modern vacuum gauges measure a change in capacitance due to the mechanical deflection. Gauges that rely on a change in capacitance are often referred to as capacitance manometers. The Bourdon pressure gauge uses the principle that a flattened tube tends to straighten or regain its circular form in cross-section when pressurized. (A party horn illustrates this principle.) This change in cross-section may be hardly noticeable, involving moderate stresses within the elastic range of easily workable materials. The strain of the material of the tube is magnified by forming the tube into a C shape or even a helix, such that the entire tube tends to straighten out or uncoil elastically as it is pressurized. Eugène Bourdon patented his gauge in France in 1849, and it was widely adopted because of its superior simplicity, linearity, and accuracy; Bourdon is now part of the Baumer group and still manufactures Bourdon tube gauges in France. 
Edward Ashcroft purchased Bourdon's American patent rights in 1852 and became a major manufacturer of gauges. Also in 1849, Bernard Schaeffer in Magdeburg, Germany patented a successful diaphragm (see below) pressure gauge, which, together with the Bourdon gauge, revolutionized pressure measurement in industry. After Bourdon's patents expired in 1875, Schaeffer's company, Schaeffer and Budenberg, also manufactured Bourdon tube gauges. In practice, a flattened thin-wall, closed-end tube is connected at the hollow end to a fixed pipe containing the fluid pressure to be measured. As the pressure increases, the closed end moves in an arc, and this motion is converted into the rotation of a (segment of a) gear by a connecting link that is usually adjustable. A small-diameter pinion gear is on the pointer shaft, so the motion is magnified further by the gear ratio. The positioning of the indicator card behind the pointer, the initial pointer shaft position, the linkage length and initial position, all provide means to calibrate the pointer to indicate the desired range of pressure for variations in the behavior of the Bourdon tube itself. Differential pressure can be measured by gauges containing two different Bourdon tubes, with connecting linkages (but is more usually measured via diaphragms or bellows and a balance system). Bourdon tubes measure gauge pressure, relative to ambient atmospheric pressure, as opposed to absolute pressure; vacuum is sensed as a reverse motion. Some aneroid barometers use Bourdon tubes closed at both ends (but most use diaphragms or capsules, see below). When the measured pressure is rapidly pulsing, such as when the gauge is near a reciprocating pump, an orifice restriction in the connecting pipe is frequently used to avoid unnecessary wear on the gears and provide an average reading; when the whole gauge is subject to mechanical vibration, the case (including the pointer and dial) can be filled with an oil or glycerin. Typical high-quality modern gauges provide an accuracy of ±1% of span (nominal diameter 100 mm, Class 1 EN 837-1), and a special high-accuracy gauge can be as accurate as 0.1% of full scale. Force-balanced fused quartz Bourdon tube sensors work on the same principle, but use the reflection of a beam of light from a mirror to sense the angular displacement. Current is applied to electromagnets to balance the force of the tube and bring the angular displacement back to zero; the current applied to the coils is used as the measurement. Due to the extremely stable and repeatable mechanical and thermal properties of quartz, and the force balancing, which eliminates nearly all physical movement, these sensors can be accurate to around 1 ppm of full scale. Due to the extremely fine fused quartz structures, which must be made by hand, these sensors are generally limited to scientific and calibration purposes. The following describes a compound gauge (vacuum and gauge pressure) with the case and window removed, showing only the dial, pointer and process connection. This particular gauge is a combination vacuum and pressure gauge used for automotive diagnosis: - The left side of the face, used for measuring vacuum, is calibrated in inches of mercury on its outer scale and centimetres of mercury on its inner scale - The right portion of the face is used to measure fuel pump pressure or turbo boost and is scaled in pounds per square inch on its outer scale and kg/cm2 on its inner scale. - A: Receiver block. 
This joins the inlet pipe to the fixed end of the Bourdon tube (1) and secures the chassis plate (B). The two holes receive screws that secure the case. - B: Chassis plate. The dial is attached to this. It contains bearing holes for the axles. - C: Secondary chassis plate. It supports the outer ends of the axles. - D: Posts to join and space the two chassis plates. - 1: Stationary end of Bourdon tube. This communicates with the inlet pipe through the receiver block. - 2: Moving end of Bourdon tube. This end is sealed. - 3: Pivot and pivot pin - 4: Link joining pivot pin to lever (5) with pins to allow joint rotation - 5: Lever, an extension of the sector gear (7) - 6: Sector gear axle pin - 7: Sector gear - 8: Indicator needle axle. This has a spur gear that engages the sector gear (7) and extends through the face to drive the indicator needle. Due to the short distance between the lever arm link boss and the pivot pin and the difference between the effective radius of the sector gear and that of the spur gear, any motion of the Bourdon tube is greatly amplified. A small motion of the tube results in a large motion of the indicator needle. - 9: Hair spring to preload the gear train to eliminate gear lash and hysteresis A second type of aneroid gauge uses deflection of a flexible membrane that separates regions of different pressure. The amount of deflection is repeatable for known pressures so the pressure can be determined by using calibration. The deformation of a thin diaphragm is dependent on the difference in pressure between its two faces. The reference face can be open to atmosphere to measure gauge pressure, open to a second port to measure differential pressure, or can be sealed against a vacuum or other fixed reference pressure to measure absolute pressure. The deformation can be measured using mechanical, optical or capacitive techniques. Ceramic and metallic diaphragms are used. For absolute measurements, welded pressure capsules with diaphragms on either side are often used. In gauges intended to sense small pressures or pressure differences, or which require that an absolute pressure be measured, the gear train and needle may be driven by an enclosed and sealed bellows chamber, called an aneroid. (Early barometers used a column of liquid such as water or the liquid metal mercury suspended by a vacuum.) This bellows configuration is used in aneroid barometers (barometers with an indicating needle and dial card), altimeters, altitude recording barographs, and the altitude telemetry instruments used in weather balloon radiosondes. These devices use the sealed chamber as a reference pressure and are driven by the external pressure. Other sensitive aircraft instruments such as air speed indicators and rate of climb indicators (variometers) have connections both to the internal part of the aneroid chamber and to an external enclosing chamber. Magnetic-coupling gauges use the attraction of two magnets to translate differential pressure into motion of a dial pointer. As differential pressure increases, a magnet attached to either a piston or rubber diaphragm moves. A rotary magnet that is attached to a pointer then moves in unison. To create different pressure ranges, the spring rate can be increased or decreased. The spinning-rotor gauge works by measuring how a rotating ball is slowed by the viscosity of the gas being measured. The ball is made of steel and is magnetically levitated inside a steel tube closed at one end and exposed to the gas to be measured at the other. 
The ball is brought up to speed (about 2500 or 3800 rad/s), and the deceleration rate is measured after switching off the drive, by electromagnetic transducers. The range of the instrument is from 10−5 Pa to 100 Pa (1,000 Pa with reduced accuracy). It is accurate and stable enough to be used as a secondary standard. In recent years this type of gauge has become much more user-friendly and easier to operate; in the past the instrument was known for requiring considerable skill and knowledge to use correctly. For high-accuracy measurements various corrections must be applied, and the ball must be spun at a pressure well below the intended measurement pressure for five hours before use. It is most useful in calibration and research laboratories where high accuracy is required and qualified technicians are available. Monitoring the insulation vacuum of cryogenic liquids is also a well-suited application for this system: the inexpensive, long-term stable, weldable sensor can be separated from the more costly readout electronics, making it a good fit for static vacuums in general. Electronic pressure instruments - Metal strain gauge - The strain gauge is generally glued (foil strain gauge) or deposited (thin-film strain gauge) onto a membrane. Membrane deflection due to pressure causes a resistance change in the strain gauge which can be electronically measured. - Piezoresistive strain gauge - Uses the piezoresistive effect of bonded or formed strain gauges to detect strain due to applied pressure. - Piezoresistive silicon pressure sensor - The sensor is generally a temperature-compensated, piezoresistive silicon pressure sensor chosen for its excellent performance and long-term stability. Integral temperature compensation is provided over a range of 0–50 °C using laser-trimmed resistors. An additional laser-trimmed resistor is included to normalize pressure sensitivity variations by programming the gain of an external differential amplifier. This provides good sensitivity and long-term stability. The two ports of the sensor apply pressure to the same single transducer. The key element is the diaphragm, which is the sensing element itself; it is slightly convex in shape, and this matters because the sensor is calibrated to work with pressure applied in one direction. Applying pressure in the calibrated direction is normal operation and gives a positive reading on the display of the digital pressure meter. Applying pressure in the reverse direction can induce errors in the results, because the air pressure tries to force the diaphragm to move in the opposite direction. The errors induced by this are small but can be significant, so for a normal gauge-pressure application the higher pressure should always be applied to the positive (+ve) port and the lower pressure to the negative (-ve) port. The same applies when measuring the difference between two vacuums: the deeper vacuum should always be applied to the negative (-ve) port. Electrically, the transducer behaves as a Wheatstone bridge driven by a constant excitation, with a basic signal-conditioning circuit amplifying the small bridge output. 
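To illustrate how such a bridge converts a small resistance change into a voltage, the sketch below evaluates an idealized four-active-arm bridge. The bridge excitation, fractional resistance change, and amplifier gain are assumed values for illustration, not parameters of any particular sensor.

```python
# Minimal sketch of an idealized four-active-arm Wheatstone bridge, as used
# in piezoresistive pressure sensors.  All numbers are illustrative.

def bridge_output(v_excitation: float, delta_r_over_r: float) -> float:
    """Differential output voltage of a full bridge.

    For a full (four-active-arm) bridge whose opposite arms change by
    +dR and -dR, the output is V_exc * (dR / R).
    """
    return v_excitation * delta_r_over_r

def amplified_output(v_bridge: float, gain: float) -> float:
    """Output after an external differential amplifier."""
    return v_bridge * gain

if __name__ == "__main__":
    v_exc = 5.0           # bridge excitation, volts (assumed)
    dr_over_r = 0.002     # 0.2 % resistance change at full-scale pressure (assumed)
    v_out = bridge_output(v_exc, dr_over_r)        # 10 mV at full scale
    print(f"bridge output: {v_out * 1e3:.1f} mV")
    print(f"after gain of 200: {amplified_output(v_out, 200.0):.2f} V")
```

A full-scale output of only a few millivolts per volt of excitation is typical of this class of sensor, which is why the external amplifier and its gain-programming resistor, described next, are needed.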
The pressure sensor is a fully active Wheatstone bridge which has been temperature compensated and offset adjusted by means of thick-film, laser-trimmed resistors. The excitation to the bridge is applied via a constant current. The low-level bridge output is at +O and -O, and the amplified span is set by the gain programming resistor (r). The electrical design is microprocessor controlled, which allows for calibration and additional user functions such as scale selection, data hold, zero and filter functions, and a record function that stores and displays maximum and minimum values. - Capacitive - Uses a diaphragm and pressure cavity to create a variable capacitor to detect strain due to applied pressure. - Magnetic - Measures the displacement of a diaphragm by means of changes in inductance (reluctance), LVDT, Hall effect, or by the eddy current principle. - Piezoelectric - Uses the piezoelectric effect in certain materials such as quartz to measure the strain upon the sensing mechanism due to pressure. - Optical - Uses the physical change of an optical fiber to detect strain due to applied pressure. - Potentiometric - Uses the motion of a wiper along a resistive mechanism to detect the strain caused by applied pressure. - Resonant - Uses the changes in resonant frequency in a sensing mechanism to measure stress, or changes in gas density, caused by applied pressure. Generally, as a real gas increases in density (which may indicate an increase in pressure), its ability to conduct heat increases. In this type of gauge, a wire filament is heated by running current through it. A thermocouple or resistance thermometer (RTD) can then be used to measure the temperature of the filament. This temperature is dependent on the rate at which the filament loses heat to the surrounding gas, and therefore on the thermal conductivity. A common variant is the Pirani gauge, which uses a single platinum filament as both the heated element and RTD. These gauges are accurate from 10−3 Torr to 10 Torr, but their calibration is sensitive to the chemical composition of the gases being measured. Pirani (one wire) A Pirani gauge consists of a metal wire open to the pressure being measured. The wire is heated by a current flowing through it and cooled by the gas surrounding it. If the gas pressure is reduced, the cooling effect will decrease, hence the equilibrium temperature of the wire will increase. The resistance of the wire is a function of its temperature: by measuring the voltage across the wire and the current flowing through it, the resistance (and so the gas pressure) can be determined. This type of gauge was invented by Marcello Pirani. In two-wire gauges, one wire coil is used as a heater, and the other is used to measure temperature due to convection. Thermocouple gauges and thermistor gauges work in this manner using a thermocouple or thermistor, respectively, to measure the temperature of the heated wire. Ionization gauges are the most sensitive gauges for very low pressures (also referred to as hard or high vacuum). They sense pressure indirectly by measuring the electrical ions produced when the gas is bombarded with electrons. Fewer ions will be produced by lower density gases. The calibration of an ion gauge is unstable and dependent on the nature of the gases being measured, which is not always known. They can be calibrated against a McLeod gauge, which is much more stable and independent of gas chemistry. Thermionic emission generates electrons, which collide with gas atoms and generate positive ions. The ions are attracted to a suitably biased electrode known as the collector. 
The current in the collector is proportional to the rate of ionization, which is a function of the pressure in the system. Hence, measuring the collector current gives the gas pressure. There are several sub-types of ionization gauge. - Useful range: 10−10 to 10−3 torr (roughly 10−8 to 10−1 Pa) Most ion gauges come in two types: hot cathode and cold cathode. In the hot cathode version, an electrically heated filament produces an electron beam. The electrons travel through the gauge and ionize gas molecules around them. The resulting ions are collected at a negative electrode. The current depends on the number of ions, which depends on the pressure in the gauge. Hot cathode gauges are accurate from 10−3 Torr to 10−10 Torr. The principle behind the cold cathode version is the same, except that electrons are produced in the discharge of a high voltage. Cold cathode gauges are accurate from 10−2 Torr to 10−9 Torr. Ionization gauge calibration is very sensitive to construction geometry, chemical composition of gases being measured, corrosion and surface deposits. Their calibration can be invalidated by activation at atmospheric pressure or low vacuum. The composition of gases at high vacuums will usually be unpredictable, so a mass spectrometer must be used in conjunction with the ionization gauge for accurate measurement. A hot-cathode ionization gauge is composed mainly of three electrodes acting together as a triode, wherein the cathode is the filament. The three electrodes are a collector or plate, a filament, and a grid. The collector current is measured in picoamperes by an electrometer. The filament voltage to ground is usually at a potential of 30 volts, while the grid voltage is held at 180–210 volts DC, unless there is an optional electron-bombardment feature (heating of the grid), in which case the grid may be held at a high potential of approximately 565 volts. The most common ion gauge is the hot-cathode Bayard–Alpert gauge, with a small ion collector inside the grid. A glass envelope with an opening to the vacuum can surround the electrodes, but usually the nude gauge is inserted in the vacuum chamber directly, the pins being fed through a ceramic plate in the wall of the chamber. Hot-cathode gauges can be damaged or lose their calibration if they are exposed to atmospheric pressure or even low vacuum while hot. The measurements of a hot-cathode ionization gauge are always logarithmic. Electrons emitted from the filament move several times in back-and-forth movements around the grid before finally entering the grid. During these movements, some electrons collide with a gaseous molecule to form a pair of an ion and an electron (electron ionization). The number of these ions is proportional to the gaseous molecule density multiplied by the electron current emitted from the filament, and these ions pour into the collector to form an ion current. Since the gaseous molecule density is proportional to the pressure, the pressure is estimated by measuring the ion current. The low-pressure sensitivity of hot-cathode gauges is limited by the photoelectric effect. Electrons hitting the grid produce x-rays that produce photoelectric noise in the ion collector. This limits the range of older hot-cathode gauges to 10−8 Torr and the Bayard–Alpert to about 10−10 Torr. Additional wires at cathode potential in the line of sight between the ion collector and the grid prevent this effect. In the extraction type the ions are not attracted by a wire, but by an open cone. 
As the ions cannot decide which part of the cone to hit, they pass through the hole and form an ion beam. This ion beam can be passed on to a: - Faraday cup - Microchannel plate detector with Faraday cup - Quadrupole mass analyzer with Faraday cup - Quadrupole mass analyzer with microchannel plate detector and Faraday cup - Ion lens and acceleration voltage and directed at a target to form a sputter gun. In this case a valve lets gas into the grid-cage. There are two subtypes of cold-cathode ionization gauges: the Penning gauge (invented by Frans Michel Penning), and the inverted magnetron, also called a Redhead gauge. The major difference between the two is the position of the anode with respect to the cathode. Neither has a filament, and each may require a DC potential of about 4 kV for operation. Inverted magnetrons can measure down to 1×10−12 Torr. However, cold-cathode gauges may be reluctant to start at very low pressures, in that the near-absence of a gas makes it difficult to establish an electrode current, in particular in Penning gauges, which use an axially symmetric magnetic field to create path lengths for electrons that are of the order of metres. In ambient air, suitable ion-pairs are ubiquitously formed by cosmic radiation; in a Penning gauge, design features are used to ease the set-up of a discharge path. For example, the electrode of a Penning gauge is usually finely tapered to facilitate the field emission of electrons. Maintenance cycles of cold cathode gauges are, in general, measured in years, depending on the gas type and pressure that they are operated in. Using a cold cathode gauge in gases with substantial organic components, such as pump oil fractions, can result in the growth of delicate carbon films and shards within the gauge that eventually either short-circuit the electrodes of the gauge or impede the generation of a discharge path.

|Physical phenomena|Instrument|Governing equation|Limiting factors|Practical pressure range|Ideal accuracy|Response time|
|---|---|---|---|---|---|---|
|Mechanical|Liquid column manometer| | |atm. to 1 mbar| | |
|Mechanical|Capsule dial gauge| |Friction|1000 to 1 mbar|±5% of full scale|Slow|
|Mechanical|Strain gauge| | |1000 to 1 mbar| |Fast|
|Mechanical|Capacitance manometer| |Temperature fluctuations|atm to 10−6 mbar|±1% of reading|Slower when filter mounted|
|Mechanical|McLeod|Boyle's law| |10 to 10−3 mbar|±10% of reading between 10−4 and 5⋅10−2 mbar| |
|Transport|Spinning rotor (drag)| | |10−1 to 10−7 mbar|±2.5% of reading between 10−7 and 10−2 mbar; 2.5 to 13.5% between 10−2 and 1 mbar| |
|Transport|Pirani (Wheatstone bridge)|Thermal conductivity| |1000 to 10−3 mbar (const. temperature); 10 to 10−3 mbar (const. voltage)|±6% of reading between 10−2 and 10 mbar|Fast|
|Transport|Thermocouple (Seebeck effect)|Thermal conductivity| |5 to 10−3 mbar|±10% of reading between 10−2 and 1 mbar| |
|Ionization|Cold cathode (Penning)|Ionization yield| |10−2 to 10−7 mbar|+100 to -50% of reading| |
|Ionization|Hot cathode (ionization induced by thermionic emission)| |Low current measurement; parasitic x-ray emission|10−3 to 10−10 mbar|±10% between 10−7 and 10−4 mbar; ±20% at 10−3 and 10−9 mbar; ±100% at 10−10 mbar| |

When fluid flows are not in equilibrium, local pressures may be higher or lower than the average pressure in a medium. These disturbances propagate from their source as longitudinal pressure variations along the path of propagation. This is also called sound. 
Sound pressure is the instantaneous local pressure deviation from the average pressure caused by a sound wave. Sound pressure can be measured using a microphone in air and a hydrophone in water. The effective sound pressure is the root mean square of the instantaneous sound pressure over a given interval of time. Sound pressures are normally small and are often expressed in units of microbar. Calibration and standards The American Society of Mechanical Engineers (ASME) has developed two separate and distinct standards on pressure measurement, B40.100 and PTC 19.2. B40.100 provides guidelines on Pressure Indicated Dial Type and Pressure Digital Indicating Gauges, Diaphragm Seals, Snubbers, and Pressure Limiter Valves. PTC 19.2 provides instructions and guidance for the accurate determination of pressure values in support of the ASME Performance Test Codes. The choice of method, instruments, required calculations, and corrections to be applied depends on the purpose of the measurement, the allowable uncertainty, and the characteristics of the equipment being tested. The methods for pressure measurement and the protocols used for data transmission are also provided. Guidance is given for setting up the instrumentation and determining the uncertainty of the measurement. Information regarding the instrument type, design, applicable pressure range, accuracy, output, and relative cost is provided. Information is also provided on pressure-measuring devices that are used in field environments, i.e., piston gauges, manometers, and low-absolute-pressure (vacuum) instruments. These methods are designed to assist in the evaluation of measurement uncertainty based on current technology and engineering knowledge, taking into account published instrumentation specifications and measurement and application techniques. This Supplement provides guidance in the use of methods to establish the pressure-measurement uncertainty. European (CEN) Standards - EN 472 : Pressure gauge - Vocabulary. - EN 837-1 : Pressure gauges. Bourdon tube pressure gauges. Dimensions, metrology, requirements and testing. - EN 837-2 : Pressure gauges. Selection and installation recommendations for pressure gauges. - EN 837-3 : Pressure gauges. Diaphragm and capsule pressure gauges. Dimensions, metrology, requirements, and testing. US ASME Standards - B40.100-2013: Pressure gauges and Gauge attachments. - PTC 19.2-2010 : The Performance test code for pressure measurement. 
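As a rough programmatic recap of the comparison table above, the sketch below suggests which gauge families nominally cover a requested pressure. The ranges are transcribed from that table and rounded; they are indicative only and not a substitute for manufacturer data.

```python
# Rough helper based on the comparison table above: which gauge families
# nominally cover a given pressure?  Ranges are indicative values in mbar.

GAUGE_RANGES_MBAR = {
    "liquid column manometer": (1.0, 1013.0),
    "capsule dial gauge": (1.0, 1000.0),
    "strain gauge": (1.0, 1000.0),
    "capacitance manometer": (1e-6, 1013.0),
    "McLeod gauge": (1e-3, 10.0),
    "spinning rotor gauge": (1e-7, 1e-1),
    "Pirani gauge": (1e-3, 1000.0),
    "thermocouple gauge": (1e-3, 5.0),
    "cold cathode (Penning) gauge": (1e-7, 1e-2),
    "hot cathode ionization gauge": (1e-10, 1e-3),
}

def candidate_gauges(pressure_mbar: float) -> list[str]:
    """Return gauge families whose nominal range covers the given pressure."""
    return [name for name, (low, high) in GAUGE_RANGES_MBAR.items()
            if low <= pressure_mbar <= high]

if __name__ == "__main__":
    for p in (500.0, 1e-2, 1e-6, 1e-9):
        print(f"{p:g} mbar: {', '.join(candidate_gauges(p)) or 'none listed'}")
```

In practice, accuracy, gas-species dependence, and response time from the same table would narrow the choice further.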
- "Characterization of quartz Bourdon-type high-pressure transducers". ResearchGate. Retrieved 2019-05-05. - Product brochure from Schoonover, Inc - A. Chambers, Basic Vacuum Technology, pp. 100–102, CRC Press, 1998. ISBN 0585254915. - John F. O'Hanlon, A User's Guide to Vacuum Technology, pp. 92–94, John Wiley & Sons, 2005. ISBN 0471467154. - Robert M. Besançon, ed. (1990). "Vacuum Techniques". The Encyclopedia of Physics (3rd ed.). Van Nostrand Reinhold, New York. pp. 1278–1284. ISBN 0-442-00522-9. - Nigel S. Harris (1989). Modern Vacuum Practice. McGraw-Hill. ISBN 978-0-07-707099-1. - US Navy (1 December 2016). U.S. Navy Diving Manual Revision 7 SS521-AG-PRO-010 0910-LP-115-1921 (PDF). Washington, DC.: US Naval Sea Systems Command. Archived (PDF) from the original on Dec 28, 2016.
Engage your students with 7 Error Analysis worksheets that focus on creating and analyzing patterns. This patterns activity will help students look critically at patterns and rules, and create their own similar problems. I began creating Error Analysis sheets for my students after reading about Marzano's New Taxonomy, or Systems of Knowledge. Under Analysis he lists Error Analysis as an exceptional way to promote thinking and learning. My students LOVE error analysis, and I have even seen kids take error analyses out to recess because they are determined to figure out what error took place, or the perfect wording to describe what happened. Some of these patterns problems are tricky, but the kids get a sense of satisfaction out of figuring out what went wrong! What kinds of patterns will students be analyzing in these error analysis worksheets? - Function tables - Geometric growth patterns - Increasing growth patterns - Picture patterns - Input/output patterns - Word problems with numeric patterns What is included in my purchase? - 7 patterns error analysis worksheets - 7 possible answer keys Digital Access: This resource includes DIGITAL Access via Google Slides. All 'Patterns Error Analysis' pages are available as printable and digital pages. The digital version is optimized for digital use and includes interactive and movable elements! How can I use Patterns Error Analysis in my classroom? You can use these as warm-ups with the whole class, as an assessment, math centers, or enrichment for early finishers! Have fun! Looking For More Pattern Activities? Patterns and Functions Task Cards © Teaching with a Mountain View All rights reserved by author. Permission to copy for single classroom use only.
An endangered species is a type of organism that is threatened by extinction. Species become endangered for two main reasons: loss of habitat and loss of genetic variation. Loss of Habitat A loss of habitat can happen naturally. Dinosaurs, for instance, lost their habitat about 65 million years ago. The hot, dry climate of the Cretaceous period changed very quickly, most likely because of an asteroid striking the Earth. The impact of the asteroid forced debris into the atmosphere, reducing the amount of heat and light that reached Earth’s surface. The dinosaurs were unable to adapt to this new, cooler habitat. Dinosaurs became endangered, then extinct. Human activity can also contribute to a loss of habitat. Development for housing, industry, and agriculture reduces the habitat of native organisms. This can happen in a number of different ways. Development can eliminate habitat and native species directly. In the Amazon rain forest of South America, developers have cleared hundreds of thousands of acres. To “clear” a piece of land is to remove all trees and vegetation from it. The Amazon rain forest is cleared for cattle ranches, logging, and urban use. Development can also endanger species indirectly. Some species, such as fig trees of the rain forest, may provide habitat for other species. As trees are destroyed, species that depend on that tree habitat may also become endangered. Tree crowns provide habitat in the canopy, or top layer, of a rain forest. Plants such as vines, fungi such as mushrooms, and insects such as butterflies live in the rain forest canopy. So do hundreds of species of tropical birds and mammals such as monkeys. As trees are cut down, this habitat is lost. Species have less room to live and reproduce. Loss of habitat may happen as development takes place in a species range. Many animals have a range of hundreds of square kilometers. The mountain lion of North America, for instance, has a range of up to 1,000 square kilometers (386 square miles). To successfully live and reproduce, a single mountain lion patrols this much territory. Urban areas, such as Los Angeles, California, and Vancouver, British Columbia, Canada, grew rapidly during the 20th century. As these areas expanded into the wilderness, the mountain lion’s habitat became smaller. That means the habitat can support fewer mountain lions. Because enormous parts of the Sierra Nevada, Rocky, and Cascade mountain ranges remain undeveloped, however, mountain lions are not endangered. Loss of habitat can also lead to increased encounters between wild species and people. As development brings people deeper into a species range, they may have more exposure to wild species. Poisonous plants and fungi may grow closer to homes and schools. Wild animals are also spotted more frequently. These animals are simply patrolling their range, but interaction with people can be deadly. Polar bears, mountain lions, and alligators are all predators brought into close contact with people as they lose their habitat to homes, farms, and businesses. As people kill these wild animals, through pesticides, accidents such as collisions with cars, or hunting, native species may become endangered. Loss of Genetic Variation Genetic variation is the diversity found within a species. It’s why human beings may have blond, red, brown, or black hair. Genetic variation allows species to adapt to changes in the environment. Usually, the greater the population of a species, the greater its genetic variation. 
Inbreeding is reproduction with close family members. Groups of species that have a tendency to inbreed usually have little genetic variation, because no new genetic information is introduced to the group. Disease is much more common, and much more deadly, among inbred groups. Inbred species do not have the genetic variation to develop resistance to the disease. For this reason, fewer offspring of inbred groups survive to maturity. Loss of genetic variation can occur naturally. Cheetahs are a threatened species native to Africa and Asia. These big cats have very little genetic variation. Biologists say that during the last ice age, cheetahs went through a long period of inbreeding. As a result, there are very few genetic differences between cheetahs. They cannot adapt to changes in the environment as quickly as other animals, and fewer cheetahs survive to maturity. Cheetahs are also much more difficult to breed in captivity than other big cats, such as lions. Human activity can also lead to a loss of genetic variation. Overhunting and overfishing have reduced the populations of many animals. Reduced population means there are fewer breeding pairs. A breeding pair is made up of two mature members of the species that are not closely related and can produce healthy offspring. With fewer breeding pairs, genetic variation shrinks. Monoculture, the agricultural method of growing a single crop, can also reduce genetic variation. Modern agribusiness relies on monocultures. Almost all potatoes cultivated, sold, and consumed, for instance, are from a single species, the Russet Burbank. Potatoes, native to the Andes Mountains of South America, have dozens of natural varieties. The genetic variation of wild potatoes allows them to adapt to climate change and disease. For Russet Burbanks, however, farmers must use fertilizers and pesticides to ensure healthy crops because the plant has almost no genetic variation. Plant breeders often go back to wild varieties to collect genes that will help cultivated plants resist pests and drought, and adapt to climate change. However, climate change is also threatening wild varieties. That means domesticated plants may lose an important source of traits that help them overcome new threats. The Red List The International Union for Conservation of Nature (IUCN) keeps a “Red List of Threatened Species.” The Red List defines the severity and specific causes of a species’ threat of extinction. The Red List has seven levels of conservation: least concern, near threatened, vulnerable, endangered, critically endangered, extinct in the wild, and extinct. Each category represents a different threat level. Species that are not threatened by extinction are placed within the first two categories—least concern and near-threatened. Those that are most threatened are placed within the next three categories, known as the threatened categories—vulnerable, endangered, and critically endangered. Those species that are extinct in some form are placed within the last two categories—extinct in the wild and extinct. Classifying a species as endangered has to do with its range and habitat, as well as its actual population. For this reason, a species can be of least concern in one area, and endangered in another. The gray whale, for instance, has a healthy population in the eastern Pacific Ocean, along the coast of North and South America. The population in the western Pacific, however, is critically endangered. Least concern is the lowest level of conservation. 
A species of least concern is one that has a widespread and abundant population. Human beings are a species of least concern, along with most domestic animals, such as dogs and cats. Many wild animals, such as pigeons and houseflies, are also classified as least concern. A near threatened species is one that is likely to qualify for a threatened category in the near future. Many species of violets, native to tropical jungles in South America and Africa, are near threatened, for instance. They have healthy populations, but their rain forest habitat is disappearing at a fast pace. People are cutting down huge areas of rain forest for development and timber. Many violet species are likely to become threatened. The definitions of the three threatened categories (vulnerable, endangered, and critically endangered) are based on five criteria: population reduction rate, geographic range, population size, population restrictions, and probability of extinction. Threatened categories have different thresholds for these criteria. As the population and range of the species decreases, the species becomes more threatened. 1) Population reduction rate A species is classified as vulnerable if its population has declined between 30 and 50 percent. This decline is measured over 10 years or three generations of the species, whichever is longer. A generation is the period of time between the birth of an animal and the time it is able to reproduce. Mice are able to reproduce when they are about one month old. Mouse populations are mostly tracked over 10-year periods. An elephant's generation lasts about 15 years. So, elephant populations are measured over 45-year periods. A species is vulnerable if its population has declined at least 50 percent and the cause of the decline is known. Habitat loss is the leading known cause of population decline. A species is also classified as vulnerable if its population has declined at least 30 percent and the cause of the decline is not known. A new, unknown virus, for example, could kill hundreds or even thousands of individuals before being identified. 2) Geographic range A species is vulnerable if its “extent of occurrence” is estimated to be less than 20,000 square kilometers (7,722 square miles). An extent of occurrence is the smallest area that could contain all sites of a species’ population. If all members of a species could survive in a single area, the size of that area is the species’ extent of occurrence. A species is also classified as vulnerable if its “area of occupancy” is estimated to be less than 2,000 square kilometers (772 square miles). An area of occupancy is where a specific population of that species resides. This area is often a breeding or nesting site in a species range. 3) Population size Species with fewer than 10,000 mature individuals are vulnerable. The species is also vulnerable if that population declines by at least 10 percent within 10 years or three generations, whichever is longer. 4) Population restrictions Population restriction is a combination of population and area of occupancy. A species is vulnerable if it is restricted to less than 1,000 mature individuals or an area of occupancy of less than 20 square kilometers (8 square miles). 5) Probability of extinction in the wild is at least 10 percent within 100 years. Biologists, anthropologists, meteorologists, and other scientists have developed complex ways to determine a species’ probability of extinction. 
These formulas calculate the chances a species can survive, without human protection, in the wild. Vulnerable Species: Ethiopian Banana Frog The Ethiopian banana frog (Afrixalus enseticola) is a small frog native to high-altitude areas of southern Ethiopia. It is a vulnerable species because its area of occupancy is less than 2,000 square kilometers (772 square miles). The extent and quality of its forest habitat are in decline. Threats to this habitat include forest clearance, mostly for housing and agriculture. Vulnerable Species: Snaggletooth Shark The snaggletooth shark (Hemipristis elongatus) is found in the tropical, coastal waters of the Indian and Pacific Oceans. Its area of occupancy is enormous, from southeast Africa to the Philippines, and from China to Australia. However, the snaggletooth shark is a vulnerable species because of a severe population reduction rate. Its population has fallen more than 10 percent over 10 years. The number of sharks is declining due to fisheries, especially in the Java Sea and Gulf of Thailand. The snaggletooth shark’s flesh, fins, and liver are considered high-quality foods. They are sold in commercial fish markets, as well as restaurants. Vulnerable Species: Galapagos Kelp Galapagos kelp (Eisenia galapagensis) is a type of seaweed only found near the Galapagos Islands in the Pacific Ocean. Galapagos kelp is classified as vulnerable because its population has declined more than 10 percent over 10 years. Climate change is the leading cause of decline among Galapagos kelp. El Nino, the natural weather pattern that brings unusually warm water to the Galapagos, is the leading agent of climate change in this area. Galapagos kelp is a cold-water species and does not adapt quickly to changes in water temperature. 1) Population reduction rate A species is classified as endangered when its population has declined between 50 and 70 percent. This decline is measured over 10 years or three generations of the species, whichever is longer. A species is classified as endangered when its population has declined at least 70 percent and the cause of the decline is known. A species is also classified as endangered when its population has declined at least 50 percent and the cause of the decline is not known. 2) Geographic range An endangered species’ extent of occurrence is less than 5,000 square kilometers (1,930 square miles). An endangered species’ area of occupancy is less than 500 square kilometers (193 square miles). 3) Population size A species is classified as endangered when there are fewer than 2,500 mature individuals. When a species population declines by at least 20 percent within five years or two generations, it is also classified as endangered. 4) Population restrictions A species is classified as endangered when its population is restricted to less than 250 mature individuals. When a species’ population is this low, its area of occupancy is not considered. 5) Probability of extinction in the wild is at least 20 percent within 20 years or five generations, whichever is longer. Endangered Species: Siberian Sturgeon The Siberian sturgeon (Acipenser baerii) is a large fish found in rivers and lakes throughout the Siberian region of Russia. The Siberian sturgeon is a benthic species. Benthic species live at the bottom of a body of water. The Siberian sturgeon is an endangered species because its total population has declined between 50 and 80 percent during the past 60 years (three generations of sturgeon). 
Overfishing, poaching, and dam construction have caused this decline. Pollution from mining activities has also contributed to abnormalities in the sturgeon's reproductive system. Endangered Species: Tahiti Reed-warbler The Tahiti reed-warbler (Acrocephalus caffer) is a songbird found on the Pacific island of Tahiti. It is an endangered species because it has a very small population. The bird is only found on a single island, meaning both its extent of occurrence and area of occupancy are very small. The Tahiti reed-warbler is also endangered because of human activity. The tropical weed Miconia is a non-native species that has taken over much of Tahiti's native vegetation. The reed-warbler lives almost exclusively in Tahiti's bamboo forests. The bird nests in bamboo and feeds on flowers and insects that live there. As development and invasive species such as Miconia destroy the bamboo forests, the population of Tahiti reed-warblers continues to shrink. Endangered Species: Ebony Ebony (Diospyros crassiflora) is a tree native to the rain forests of central Africa, including Congo, Cameroon, and Gabon. Ebony is an endangered species because many biologists calculate its probability of extinction in the wild is at least 20 percent within five generations. Ebony is threatened due to overharvesting. Ebony trees produce a very heavy, dark wood. When polished, ebony can be mistaken for black marble or other stone. For centuries, ebony trees have been harvested for furniture and sculptural uses such as chess pieces. Most ebony, however, is harvested to make musical instruments such as piano keys and the fingerboards of stringed instruments. Critically Endangered Species 1) Population reduction rate A critically endangered species' population has declined between 80 and 90 percent. This decline is measured over 10 years or three generations of the species, whichever is longer. A species is classified as critically endangered when its population has declined at least 90 percent and the cause of the decline is known. A species is also classified as critically endangered when its population has declined at least 80 percent and the cause of the decline is not known. 2) Geographic range A critically endangered species' extent of occurrence is less than 100 square kilometers (39 square miles). A critically endangered species' area of occupancy is estimated to be less than 10 square kilometers (4 square miles). 3) Population size A species is classified as critically endangered when there are fewer than 250 mature individuals. A species is also classified as critically endangered when the number of mature individuals declines by at least 25 percent within three years or one generation, whichever is longer. 4) Population restrictions A species is classified as critically endangered when its population is restricted to less than 50 mature individuals. When a species' population is this low, its area of occupancy is not considered. 5) Probability of extinction in the wild is at least 50 percent within 10 years or three generations, whichever is longer. Critically Endangered Species: Bolivian Chinchilla Rat The Bolivian chinchilla rat (Abrocoma boliviensis) is a rodent found in a small section of the Santa Cruz region of Bolivia. It is critically endangered because its extent of occurrence is less than 100 square kilometers (39 square miles). The major threat to this species is loss of its cloud forest habitat. People are clearing forests to create cattle pastures. 
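The numeric thresholds spelled out above for the three threatened categories lend themselves to a simple decision procedure. The sketch below encodes only the population-reduction and population-size criteria as stated in this article; it is a deliberately simplified subset (real IUCN assessments weigh all five criteria together, and the geographic-range and probability-of-extinction criteria are omitted here), and the example numbers are invented for illustration.

```python
# Simplified sketch of the Red List thresholds described above.
# Only the population-reduction and population-size criteria are encoded;
# real IUCN assessments combine all five criteria and expert judgment.

def assessment_window_years(generation_length_years: float) -> float:
    """Decline is measured over 10 years or three generations, whichever is longer."""
    return max(10.0, 3.0 * generation_length_years)

def category_from_decline(decline_percent: float, cause_known: bool) -> str:
    """Threat category implied by population decline over the assessment window."""
    # Thresholds are higher when the cause of the decline is known.
    if cause_known:
        thresholds = [("critically endangered", 90), ("endangered", 70), ("vulnerable", 50)]
    else:
        thresholds = [("critically endangered", 80), ("endangered", 50), ("vulnerable", 30)]
    for name, cutoff in thresholds:
        if decline_percent >= cutoff:
            return name
    return "not threatened under this criterion"

def category_from_population(mature_individuals: int) -> str:
    """Threat category implied by the number of mature individuals alone."""
    if mature_individuals < 250:
        return "critically endangered"
    if mature_individuals < 2500:
        return "endangered"
    if mature_individuals < 10000:
        return "vulnerable"
    return "not threatened under this criterion"

if __name__ == "__main__":
    # Hypothetical species with a 15-year generation length (as for an elephant):
    print(assessment_window_years(15))                    # 45-year window
    print(category_from_decline(60, cause_known=True))    # vulnerable
    print(category_from_decline(60, cause_known=False))   # endangered
    print(category_from_population(1800))                 # endangered
```

A full assessment would layer the geographic-range, population-restriction, and probability-of-extinction criteria on top of these in the same fashion, taking the most severe category that any criterion supports.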
Critically Endangered Species: Transcaucasian Racerunner The Transcaucasian racerunner (Eremias pleskei) is a lizard found on the Armenian Plateau, located in Armenia, Azerbaijan, Iran, and Turkey. The Transcaucasian racerunner is a critically endangered species because of a huge population decline, estimated at more than 80 percent during the past 10 years. Threats to this species include the salination, or increased saltiness, of soil. Fertilizers used for agricultural development seep into the soil, increasing its saltiness. Racerunners live in and among the rocks and soil, and cannot adapt to the increased salt in their food and shelter. The racerunner is also losing habitat as people create trash dumps on their area of occupancy. Critically Endangered Species: White Ferula Mushroom The white ferula mushroom (Pleurotus nebrodensis) is a critically endangered species of fungus. The mushroom is critically endangered because its extent of occurrence is less than 100 square kilometers (39 square miles). It is only found in the northern part of the Italian island of Sicily, in the Mediterranean Sea. The leading threats to white ferula mushrooms are loss of habitat and overharvesting. White ferula mushrooms are a gourmet food item. Farmers and amateur mushroom hunters harvest the fungus for food and profit. The mushrooms can be sold for up to $100 per kilogram (2.2 pounds). Extinct In The Wild A species is extinct in the wild when it only survives in cultivation (plants), in captivity (animals), or as a population well outside its established range. A species may be listed as extinct in the wild only after years of surveys have failed to record an individual in its native or expected habitat. Extinct in the Wild: Scimitar-horned Oryx The scimitar-horned oryx (Oryx dammah) is a species of antelope with long horns. Its range extends across northern Africa. The scimitar-horned oryx is listed as extinct in the wild because the last confirmed sighting of one was in 1988. Overhunting and habitat loss, including competition with domestic livestock, are the main reasons for the extinction of the oryx’s wild population. Captive herds are now kept in protected areas of Tunisia, Senegal, and Morocco. Scimitar-horned oryxes are also found in many zoos. Extinct in the Wild: Black Soft-shell Turtle The black soft-shell turtle (Nilssonia nigricans) is a freshwater turtle that exists only in one man-made pond, at the Baizid Bostami Shrine near Chittagong, Bangladesh. The 150 to 300 turtles that live at the pond rely entirely on humans for food. Until 2000, black soft-shell turtles lived throughout the wetlands of the Brahmaputra River, feeding mostly on freshwater fish. Unlike other animals that are extinct in the wild, black soft-shell turtles are not found in many zoos. The shrine’s caretakers do not allow anyone, including scientists, to take the turtles. The reptiles are considered to be the descendants of people who were miraculously turned into turtles by a saint during the 13th century. Extinct in the Wild: Mt. Kaala Cyanea The Mt. Kaala cyanea (Cyanea superba) is a large, flowering tree native to the island of Oahu, in the U.S. state of Hawaii. The Mt. Kaala cyanea has large, broad leaves and fleshy fruit. The tree is extinct in the wild largely because of invasive species. Non-native plants crowded the cyanea out of its habitat, and non-native animals such as pigs, rats, and slugs ate its fruit more quickly than it could reproduce. Mt. 
Kaala cyanea trees survive in tropical nurseries and botanical gardens. Many botanists and conservationists look forward to establishing a new population in the wild. A species is extinct when there is no reasonable doubt that the last remaining individual of that species has died. Extinct: Cuban Macaw The Cuban macaw (Ara tricolor) was a tropical parrot native to Cuba and a small Cuban island, Isla de la Juventud. Hunting and collecting the birds for pets led to the bird’s extinction. The last specimen of the Cuban macaw was collected in 1864. Extinct: Ridley’s Stick Insect Ridley’s stick insect (Pseudobactricia ridleyi) was native to the tropical jungle of the island of Singapore. This insect, whose long, segmented body resembled a tree limb, is only known through a single specimen, collected more than 100 years ago. During the 20th century, Singapore experienced rapid development. Almost the entire jungle was cleared, depriving the insect of its habitat. Extinct: Sri Lankan Legume Tree The Sri Lankan legume tree (Crudia zeylanica), native only to the island of Sri Lanka in the Indian Ocean, was a giant species of legume. Peas and peanuts are smaller types of legumes. Habitat loss from development in the 20th century is the main reason the tree went extinct in the wild. A single specimen survived at the Royal Botanical Garden in Peradeniya, Sri Lanka, until 1990, when that, too, was lost. Endangered Species and People When a species is classified as endangered, governments and international organizations can work to protect it. Laws may limit hunting and destruction of the species’ habitat. Individuals and organizations that break these laws may face huge fines. Because of such actions, many species have recovered from their endangered status. The brown pelican was taken off the endangered species list in 2009, for instance. This seabird is native to the coasts of North America and South America, as well as the islands of the Caribbean Sea. It is the state bird of the U.S. state of Louisiana. In 1970, the number of brown pelicans in the wild was estimated at 10,000. The bird was classified as vulnerable. During the 1970s and 1980s, governments and conservation groups worked to help the brown pelican recover. Young chicks were reared in hatching sites, then released into the wild. Human access to nesting sites was severely restricted. The pesticide DDT, which damaged the eggs of the brown pelican, was banned. During the 1980s, the number of brown pelicans soared. In 1988, the IUCN “delisted” the brown pelican. The bird, whose population is now in the hundreds of thousands, is now in the category of least concern. Until 2012, Lonesome George was the most endangered species on the planet. He was the only living species of Pinta Island tortoise known to exist. The Pinta Island tortoise was only found on Pinta, one of the Galapagos Islands. The Charles Darwin Research Station, a scientific facility in the Galapagos, offered a $10,000 reward to any zoo or individual for locating a single Pinta Island tortoise female. On June 25, 2012, Lonesome George died, leaving one more extinct species in the world. Convention on Biological Diversity The Convention on Biological Diversity is an international treaty to sustain and protect the diversity of life on Earth. This includes conservation, sustainability, and sharing the benefits of genetic research and resources. 
The Convention on Biological Diversity has adopted the IUCN Red List of endangered species in order to monitor and research species' population and habitats. Three nations have not ratified the Convention on Biological Diversity: Andorra, the Holy See (Vatican), and the United States.
organism threatened with extinction. smallest area that could contain all the sites of a species' population, used to determine a species level of conservation. no longer existing. highest level of conservation of a living species, when the only living members of that species are protected in captivity such as zoos or aquariums. process of complete disappearance of a species from Earth. land cultivated for crops, livestock, or both. nutrient-rich chemical substance (natural or manmade) applied to soil to encourage plant growth. to punish, usually by charging an economic penalty or fee. Or, the penalty or fee itself. flat part of a stringed instrument where the musician presses the string down to create a note. industry or occupation of harvesting fish, either in the wild or through aquaculture. (singular: fungus) organisms that survive by decomposing and absorbing nutrients in organic material such as soil or dead organisms. group in a species made up of members that are roughly the same age. differences in the genes among individual members of a species. high-quality, expensive, or difficult-to-prepare food. environment where an organism lives throughout the year or for shorter periods of time. to emerge from an egg. large mammal native to Africa that lives near rivers. long period of cold climate where glaciers cover large parts of the Earth. The last ice age peaked about 20,000 years ago. Also called glacial age. collision or crash. to produce offspring with close family members. activity that produces goods and services. unit made up of governments or groups in different countries, usually for a specific purpose. environmental organization concerned with preserving natural ecosystems and habitats. type of plant or animal that is not indigenous to a particular area and causes economic or environmental harm. body of land surrounded by water. tropical ecosystem filled with trees and underbrush. type of seaweed. lowest level of conservation, used when the population and habitat of a species are healthy. type of plant with a pod that splits, with seeds in the middle, such as peanuts. one of seven categories of a species' threat of extinction, assigned by the International Union for the Conservation of Nature: least concern, near threatened, vulnerable, endangered, critically endangered, extinct in the wild, and extinct. animals raised for sale and profit. industry engaged in cutting down trees and moving the wood to sawmills. last surviving Pinta Island tortoise. long-tailed parrot native to the Americas. type of metamorphic rock. adult member of a species who is able to reproduce. person who studies patterns and changes in Earth's atmosphere. process of extracting ore from the Earth. to observe and record behavior or data. the system of growing one type of crop. large cat native to North and South America. Also called a cougar, puma, catamount, and panther. series or chain of mountains that are close together. level of conservation between "least concern" and "vulnerable." place where birds build nests and raise their young. a type of plant or animal that is not indigenous to a particular area. Non-native species can sometimes cause economic or environmental harm as an invasive species. the children of a person or animal. antelope native to Africa. to harvest aquatic life to the point where species become rare in the area. to use more of a resource than can be replaced naturally. to capture and kill enough animals to reduce their breeding population below sustainable levels. 
type of bird with a large beak. to survey and monitor an area by passing through it. large marine bird with a big bill. natural or manufactured substance used to kill organisms that threaten agriculture or are undesirable. Pesticides can be fungicides (which kill harmful fungi), insecticides (which kill harmful insects), herbicides (which kill harmful plants), or rodenticides (which kill harmful rodents.) animal kept as a helper or companion. lever, either black or white, which triggers a hammer to hit a specific string inside the body of a piano. large region that is higher than the surrounding area and relatively flat. to hunt, trap, or fish illegally. toxic or containing dangerous chemicals. introduction of harmful materials into the environment. rate at which the numbers of a specific species are shrinking. calculation of a species' population and its area of occupancy that helps determine its conservation status. calculation of how long a species can survive without human protection. area of tall, mostly evergreen trees and a high amount of rainfall. to formally approve or confirm. list defining the severity and causes of each species' threat of extinction. The Red List is maintained by the International Union for the Conservation of Nature (IUCN). to lower or lessen. to create offspring, by sexual or asexual means. organs involved in an organism's reproduction. order of mammals often characterized by long teeth for gnawing and nibbling. large, brown-skinned potato often used for making french fries in fast food restaurants. process where soils build up high salt content. long, curved sword with a sharp outer edge, developed in the Middle East. bird native to an aquatic environment. marine algae. Seaweed can be composed of brown, green, or red algae, as well as "blue-green algae," which is actually bacteria. to slowly flow through a border. place of worship or spiritual devotion. top layer of the Earth's surface where plants can grow. bird with a recognizable vocal pattern. native, geographic area in which an organism can be found. Range also refers to the geographic distribution of a particular species. individual organism that is a typical example of its classification. type of marine or freshwater large, long, bony fish. use of resources in such a manner that they will never be exhausted. habit or predictable way of behaving. three levels of endangered species: vulnerable, endangered, and critically endangered. organism that may soon become endangered. point in a process that must be met to start a new stage in the process. wood in an unfinished form, either trees or logs. land-based turtle, usually with a tall, rounded shell. top branches of a tree. existing in the tropics, the latitudes between the Tropic of Cancer in the north and the Tropic of Capricorn in the south. type of reptile with a shell encasing most of its body. developed, densely populated area where most inhabitants have nonagricultural jobs. all the plant life of a specific place. plant with small flowers. pathogenic agent that lives and multiplies in a living cell. level of conservation between "near threatened" and "endangered." Vulnerable is the lowest of the "threatened" categories. type of small songbird. repeating or predictable changes in the Earth's atmosphere, such as winds, precipitation, and temperatures. area of land covered by shallow water or saturated by water. place where animals are kept for exhibition.
1824 Constitution of Mexico
|This article needs additional citations for verification. (October 2015)|
|Federal Constitution of the United Mexican States|
Original front of the 1824 Constitution
|Created||January 21, 1824|
|Ratified||October 4, 1824|
|Location||General Archive of the Nation in the Lecumberri Palace|
|Author(s)||General Constituent Congress|
|Signatories||General Constituent Congress|
The Federal Constitution of the United Mexican States of 1824 (Spanish: Constitución Federal de los Estados Unidos Mexicanos de 1824) was enacted on October 4, 1824, after the overthrow of the Mexican Empire of Agustín de Iturbide. In the new constitution, the republic took the name of United Mexican States, and was defined as a representative federal republic, with Catholicism as the official and sole religion. It was replaced by the Federal Constitution of the United Mexican States of 1857.
- 1 Background
- 2 Second Constituent Congress
- 3 Drafting a constitution
- 4 Nature of the constitution
- 5 Struggle among confederalists, federalists, and centralists
- 6 Weak executive branch
- 7 Constitution of 1824
- 8 Content
- 9 Federation
- 10 Reactions
- 11 Repeal and restoration
- 12 See also
- 13 References
- 14 External links
The Mexican War of Independence (1810–1821) severed the control that Spain had exercised over its North American territories, and the new country of Mexico was formed from much of the territory that had comprised New Spain. The victorious rebels issued a provisional constitution, the Plan de Iguala. This plan reaffirmed many of the ideals of the Spanish Constitution of 1812 and granted equal citizenship rights to all races. In the early days of the country, there was much disagreement over whether Mexico should be a federal republic or a constitutional monarchy. One of the leaders of the revolution, Agustín de Iturbide, became the first monarch, Agustín I. As discontent with Agustín's national government grew, Brigadier Antonio López de Santa Anna initiated an insurrection. Generals issued the Plan of Casa Mata on 1 February 1823. The plan won the support of the provinces because it included a provision granting local authority to the provincial deputations. The election of a new legislature constituted the plan’s principal demand, because provincial leaders considered the composition of the first congress to be flawed. Following the precedent of the Hispanic Cortes, Mexican political leaders considered the executive to be subservient to the legislature. Thus, a new congress, which did not possess the liabilities of the old, could restore confidence even if the executive remained in place. Mexican politicians, of course, expected the new body to keep the emperor in check. Agustín abdicated in March 1823. The failure of Iturbide’s short-lived empire ensured that any future government would be republican. The reconvened Mexican Cortes appointed a triumvirate called the Supreme Executive Power which would alternate the presidency among its members on a monthly basis. But the question of how the nation was to be organized remained unresolved. The Mexican Cortes, following the Cádiz model, maintained that it was sovereign since it represented the nation. The provinces, however, believed that they possessed sovereignty, a portion of which they collectively ceded to form a national government. 
The Cortes insisted on writing the nation’s constitution, but the provinces maintained that it could only convene a new constituent congress based on the electoral regulations of the Constitution of Cádiz. Neither side was willing to cede. In the months that followed, the provinces assumed control of their governments through their provincial deputations. Four provinces, Oaxaca, Yucatán, Guadalajara, and Zacatecas, converted themselves into states. To avoid civil war, the Cortes acquiesced and elected a new constituent congress. Elections for a second constituent assembly, based on a convocatoria issued 26 June 1821 by the Hispanic Cortes, were held throughout the nation in August and September. The executive branch was not restructured, because both the provinces and the new constituent congress considered it subservient to the legislature. Second Constituent Congress The new congress, which the provinces had insisted upon since March, finally met on 7 November 1823. The second Constituent Congress was quite different from the first. It represented the provinces more equitably, and some of its members possessed instructions to form only a federal republic. Oaxaca, Yucatán, Jalisco, and Zacatecas, which had become states, elected state congresses, rather than provincial deputations, as the convocatoria required. The Mexico City-based national elite, which had been struggling for power since 1808, and which had taken control in 1821, lost it two years later to the provincial elites. Although some members of the national elite were elected to the new constituent congress, they formed a distinct minority. Indeed, only thirty-five of the one hundred-forty-four deputies and alternates elected to the new legislature had served in the earlier Mexican Cortes. The constituent congress, which convened on 7 November 1823, faced very different circumstances from its predecessor. Not only had the provinces declared their sovereignty, but they had also restricted the authority of their delegates. Valladolid, Michoacán, for example, declared: "This province in the federation does not wish to relinquish the major portion of its liberty and other rights; it only grants [its deputies] the authority absolutely necessary to keep the portion it retains." Mérida, Yucatán, decreed that "the elected deputies are granted only the power (...) to constitute the nation in a government that is republican, representative and federal", and that: "The federal constitution that they form and agree with the other deputies of the Constituent Congress will not have the force of law in the nation until the majority of the federated states ratify it." Zacatecas, Zacatecas, was even more explicit, asserting that "The deputies to the future congress cannot constitute the nation as they deem convenient, but only as a federal republic." Guadalajara insisted that the pueblos of Jalisco wanted only a popular, representative and republican form of government. Other provinces made similar declarations. The new congress represented regional interests. Therefore, the debate in the legislature focused on the division of power between the national and the provincial governments, not on whether Mexico would be a federal or a central republic. The delegates were divided into a confederalist, two federalist, and one centralist faction. 
The confederalists, extreme defenders of local rights like Juan de Dios Cañedo, argued that only the provinces possessed sovereignty, a portion of which they collectively ceded to the union to form a national government. This interpretation meant that the provinces, or states, as Oaxaca, Yucatán, Jalisco and Zacatecas now called themselves, could subsequently reclaim the power they had relinquished. They were opposed by federalists like Servando Teresa de Mier who believed that only the nation was sovereign. In their view, although the country was organised into provinces, or states, for political purposes, the people, not the states, possessed sovereignty. The deputies, therefore, did not represent the states, but the people who constituted the nation. As the representative of the Mexican people, Congress possessed greater power and authority than the state legislatures. In a sense, they were reasserting the position which had prevailed in Cádiz in 1812. Midway between these extremes stood men like the federalist Ramos Arizpe, who believed that the national government and the states shared sovereignty. Although they favoured states’ rights, they nevertheless believed that the national government had to command sufficient power to function effectively. The confederalist/federalist factions were opposed by a tiny minority of centralists who argued that sovereignty was vested in the nation and that Mexico needed a strong national government. Drafting a constitution A committee consisting of Ramos Arizpe, Cañedo, Miguel Argüelles, Rafael Mangino, Tomás Vargas, José de Jesús Huerta, and Manuel Crescencio Rejón, submitted an Acta Constitutiva (draft of a constitution) on 20 November. The group completed the draft of the charter in a few days. This was possible because the document was based on the shared Hispanic political theory and practice that Mexicans, the former novohispanos, knew well, since they had played a significant role in shaping it. In the years since Napoleon had invaded Spain in 1808, the political entities that formed the Mexican nation in 1821 had undergone a series of rapid political changes that politicised the majority of the population and led to a vibrant political discourse. The Hispanic Constitution of 1812 and its institutions of government were well known; moreover, seven proposals for a Mexican constitution had been debated throughout the country in the previous months. The constituent congress, therefore, was filled with educated individuals with diverse ideas and extensive political experience at the local, state, national, and international levels. A few, like Ramos Arizpe and Guridi y Alcocer, had served in the Cortes in Spain and had participated in the discussions of the Constitution of 1812. In addition, Ramos Arizpe had been working on a federal constitution for some time. Nature of the constitution The Acta Constitutiva submitted by the committee was modelled on the Hispanic Constitution of 1812. Most of its articles were based on the Peninsular document; a few were adopted verbatim from that charter. For example, on the question of sovereignty the Hispanic Constitution stated: "Sovereignty resides essentially in the nation and, therefore, it [the nation] possesses the exclusive right to adopt the form of government that seems most convenient for its conservation and prosperity". 
Article 3 of the Mexican Acta Constitutiva read: "Sovereignty resides radically and essentially in the nation and, therefore, it [the nation] possesses the exclusive right to adopt by means of its representatives the form of government and other fundamental laws that seem most convenient for its conservation and greater prosperity". Although the deputies relied on their first constitutional experience, the Constitution of 1812, they did not slavishly copy the Hispanic model. Guridi y Alcocer, for example, explained that ever since he had served on the constitutional commission in the Hispanic Cortes he had maintained that sovereignty resided radically in the nation, by which he meant that the nation, as the institutional representative of el Pueblo, could not lose its sovereignty. His principal critics were radical federalists like Juan de Dios Cañedo, deputy from Jalisco, who challenged the need for an article declaring national sovereignty. He asked: that the article be deleted because in a republican federal government each state is sovereign. (…) Therefore, it is impossible to conceive how sovereignty, which is the origin and source of authority and power, can be divided among the many states. [T]hat is why the first constitution of the United States [the Articles of Confederation] (…) does not mention national sovereignty. And, therefore, (…) Article 1 which discusses the nation should not be approved because it is not appropriate in the system we now have. The Acta, unlike the Hispanic constitution, did not grant exclusive or even preponderant sovereignty to the nation, because the states also claimed sovereignty. Accordingly, Article 6 stated: "Its integral parts are independent, free, and sovereign States in that which exclusively concerns their administration and interior government". The issue of sovereignty remained at heart a question of the division of power between the national and the state governments. It was an issue that would be debated at length in the months to come. Struggle among confederalists, federalists, and centralists The proponents of state sovereignty—the confederalists—were challenged by some less radical federalist delegates who argued that only the nation could be sovereign. Because these men stressed the need to endow the national government with sufficient power to sustain national interests, they are often mistakenly considered centralists. Servando Teresa de Mier, their outstanding spokesman, argued that people wrongly considered him a centralist, an error that arose from an unnecessarily restrictive definition of federalism. He indicated that federalism existed in many forms: the Netherlands, Germany, Switzerland and the United States were federations, yet each was different. Mier advocated the establishment of a unique brand of federalism suited to Mexico. He believed that local realities precluded the adoption of the extreme form of federalism—confederalism—championed by states’ righters. He declared: "I have always been in favour of a federation, but a reasonable and moderate federation. (...) I have always believed in a medium between the lax federation of the United States, whose defects many writers have indicated, (…) and the dangerous concentration [of executive power] in Colombia and Peru." 
In his view, Mexico needed a strong federal system because the country required an energetic and decisive national government to lead it during the crucial early years of nationhood, particularly since Spain refused to recognise Mexico’s independence and the Holy Alliance threatened to intervene. For these reasons, Mier voted in favour of Article 5, which established a federal republic, while opposing Article 6, which granted sovereignty to the states. Neither the advocates of states' rights, like Cañedo, nor the proponents of national sovereignty, like Mier, triumphed. Instead, a compromise emerged: shared sovereignty, as advocated by moderate federalists such as Ramos Arizpe. Throughout the debates, he and others argued that although the nation was sovereign, the states should control their internal affairs. The group saw no conflict between Article 3, which declared that sovereignty resided in the nation, and Article 6, which granted sovereignty to the states on internal matters. The moderates were able to forge shifting coalitions to pass both articles. First, they brought Article 3 to a vote. A coalition of the proponents of national sovereignty, the advocates of shared sovereignty, and a few centralists passed the article by a wide margin. To secure passage of Article 6, those favouring approval succeeded in having the question brought to the floor in two parts. The first vote, on the section of Article 6 which indicated that the states were independent and free to manage their own affairs, passed by a wide margin, since the wording pleased all the confederalist/federalist groups, including the one led by Father Mier. Only seven centralist deputies opposed the measure. Then Congress examined the section of Article 6 which declared that the states were sovereign. The coalition divided on this issue: Father Mier and his supporters joined the centralists in voting against the measure. Nevertheless, the proponents of states' rights and those who believed in shared sovereignty possessed enough strength to pass the measure by a margin of 41 to 28 votes. The states did not just share sovereignty with the national government; they obtained the financial means to enforce their authority. They gained considerable taxing power at the expense of the federal government, which lost approximately half the revenue formerly collected by the viceregal administration. To compensate for that loss, the states were to pay the national government a contingente assessed for each state according to its means. As a result, the nation would have to depend upon the goodwill of the states to finance or fulfil its responsibilities. Weak executive branch The constituent congress’s decision to share sovereignty, moreover, did not settle the question of the division of power within the national government. Although all agreed on the traditional concept of separation of powers among the legislative, executive, and judicial branches, most congressmen believed that the legislature should be dominant. Recent Hispanic and Mexican experience had fostered a distrust of executive power. Therefore, the earlier Mexican Cortes had established a plural executive, the Supreme Executive Power. Since that body was perceived as subservient to the legislature, neither the provinces nor the Second Constituent Congress bothered to appoint a new executive. 
The authors of the Acta Constitutiva, however, proposed in Article 16 that executive power be conferred "on an individual with the title of president of the Mexican Federation, who must be a citizen by birth of said federation and have attained at least thirty-five years of age". The proposal led to a heated debate that transcended the former division between states’ righters and strong nationalist coalitions. While Cañedo supported Ramos Arizpe in favouring a single executive, others, including Rejón and Guridi y Alcocer, insisted on the need to weaken executive power by establishing a plural executive. Ramos Arizpe proposed that the president govern with the aid of a council of government. But that was not sufficient to mollify the opposition, which had the majority in congress. The opponents of a single executive presented several counter-proposals. Demetrio Castillo of Oaxaca suggested that a president, a vice-president and an alternate, called designee, should govern. Each would have a vote, but the president would cast the deciding one. Rejón, instead, recommended that three individuals form the Supreme Executive Power; their terms would be staggered so that one member would always possess seniority, but no individual would serve more than three years. Guridi y Alcocer proposed that the executive power be conferred on two persons. He argued that the best solution was to merge the experiences of ancient Rome, Spain, and the United States. Therefore, he urged that the two members of the executive power be backed by two alternates, who might resolve any differences that arose between the two members of the executive. Article 16 of the Acta Constitutiva was put to a vote on 2 January 1824 at an extraordinary session. It was defeated by a vote of 42 to 25. As a result, the congress did not address Article 17, which dealt with the vice-president. The proposal to establish a president and a vice-president was one of the few instances in which the second constitution of the United States served as a model. The majority did not agree with the proposal because it feared the possibility of one individual dominating Congress through military or popular forces, as Iturbide had done. The commission on the constitution revised the articles on the executive a number of times, but could not obtain support for its proposals. The fear of provincial disorder also influenced the debate. After Articles 5 and 6 of the Acta Constitutiva had been approved, several provinces decided to implement their right to form their own government. The national administration viewed their actions with concern, particularly because some movements were also anti-European Spaniards. The revolt of 12 December in Querétaro, for example, demanded the expulsion of gachupines (Spaniards who had come to Mexico) from the country. A similar uprising occurred later in Cuernavaca. In both instances, the national government sent forces to restore order. Then, on 23 December, Puebla declared itself a sovereign, free, and independent state. The authorities in Mexico City immediately concluded that the military commander of the province, General José Antonio de Echávarri, was responsible for the "revolt". Therefore, the government dispatched an army under the command of Generals Manuel Gómez Pedraza and Vicente Guerrero to restore order. The forces of the national government approached the capital city of Puebla at the end of December 1823. 
After lengthy negotiations, General Gómez Pedraza proposed that, since Congress was about to issue the convocatoria for national and state elections, the leaders of Puebla renounce their earlier action and hold new elections. The Poblanos agreed. The convocatoria was received in Puebla on 12 January 1824. Elections were held throughout the province and a new state government was inaugurated on 22 March 1824. Although the national government had maintained order in the nation, the revolt led by General Jose María Lobato on 20 January 1824 demonstrated that the plural executive could not act with the unity of purpose and the speed necessary to quell a large scale uprising in the capital. The rebels demanded the dismissal of Spaniards from government jobs and their expulsion from the country. Lobato managed to win support of the garrisons in the capital and the government seemed on the verge of capitulation when the Supreme Executive Power convinced Congress to declare Lobato an outlaw and to grant the executive sufficient power to quell the rebellion. As a result of the crisis, the majority in Congress eventually decided to establish an executive branch composed of a president and a vice-president. The creation of a single executive, however, did not mean that Congress had accepted a strong presidency. Most Mexicans continued to favour legislative supremacy. The Mexican charter, like the Hispanic constitution, severely restricted the power of the chief executive. The Constitution of 1824 created a quasi-parliamentary system in which the ministers of state answered to the congress. Consequently, the minister of interior and foreign relations acted as a quasi-prime minister. The creation of a national government did not end the tensions between the provinces and Mexico City. The debate over the location of the country's capital sparked a new conflict. The national elite favoured making the "Imperial City of Mexico" the capital of the republic. The regional elites were divided. During 1823, while discussing the importance of local control, they also emphasised the need to maintain a "centre of unity", that is, a capital. However, a significant number pointedly refused to bestow that honour upon Mexico City. The special committee on the nation's capital recommended to the Constituent Congress on 31 May 1824 that another city, Querétaro, become the capital, and that the territory around it become the federal district. After a heated debate, Congress rejected the proposal to move the capital from Mexico City. Thereafter, the discussion centred on whether or not a federal district should be created. The ayuntamiento and the provincial deputation of Mexico were vehemently against such action. Indeed, the provincial legislature threatened secession and civil war if Mexico City were federalised. Nevertheless, on 30 October Congress voted fifty-two to thirty-one to make Mexico City the nation’s capital and to create a federal district. Constitution of 1824 After months of debate, Congress ratified the constitution, on 4 October 1824. The new charter affirmed that: Article 3: The religion of the Mexican nation is and will permanently be the Roman, Catholic, Apostolic [religion]. The nation protects her with wise and just laws and prohibits the exercise of any other [religion]. Article 4. The Mexican nation adopts for its government a representative, popular, federal republic. Article 5. 
The parts of this federation are the following states and territories: the states of Chiapas, Chihuahua, Coahuila and Texas, Durango, Guanajuato, México, Michoacán, Nuevo León, Oaxaca, Puebla de los Ángeles, Querétaro, San Luis Potosí, Sonora and Sinaloa, Tabasco, Tamaulipas, Veracruz, Xalisco, Yucatán and Zacatecas; and the territories of: Alta California, Baja California, Colima and Santa Fe de Nuevo México. A constitutional law will determine the status of Tlaxcala. Article 74. The supreme executive power of the federation is deposited in only one individual who shall be called President of the United States of Mexico (Estados Unidos Mexicanos). Article 75. There will also be a vice president who, in case of the physical or moral incapacity of the president, will receive all his authority and prerogatives. Like the Acta Constitutiva, the Constitution of 1824 was modelled on the Hispanic Constitution of 1812, not, as is often asserted, on the U.S. Constitution of 1787. Although superficially similar to the second U. S. Charter, and although it adopted a few practical applications from the U.S. Constitution, such as the executive, the Mexican document was based primarily on Hispanic constitutional and legal precedents. For example, although the Constitution of 1824 created a president, in Mexico the office was subordinate to the legislature. Since the Mexican republic was essentially confederalist rather than federalist, the Mexican Charter was closer in spirit to the U.S.’s first constitution, the Articles of Confederation, than to the U.S. Constitution of 1787. Entire sections of the Cádiz Charter were repeated verbatim in the Mexican document because Mexicans did not reject their Hispanic heritage, and because some of the individuals who drafted the new republican constitution had served in the Cortes of Cádiz and had helped write the 1812 Charter. Both the Hispanic Constitution of 1812 and the Mexican Constitution of 1824 established powerful legislatures and weak executives. But it would be an error to consider the Constitution of 1824 a mere copy of the 1812 document. Events in Mexico, particularly the assertion of states’ rights by the former provinces, forced Congress to frame a constitution to meet the unique circumstances of the nation. The principal innovations—republicanism, federalism, and a presidency—were adopted to address Mexico's new reality. The monarchy was abolished because both Fernando VII and Agustín I had failed as political leaders, not because Mexicans imitated the United States' charter. Federalism arose naturally from Mexico's earlier political experience. The provincial deputations created by the Constitution of Cádiz simply converted themselves into states. However, unlike the 1812 document, the Mexican charter gave the states significant taxing power. Although modelled on the Hispanic Constitution of 1812, the new charter did not address a number of issues included in the earlier document because the new Mexican federation shared sovereignty between the national government and the states. Thus, unlike the Constitution of Cádiz, which defined citizenship, the Mexican Constitution of 1824 remained silent on the subject. Similarly, it didn't define who possessed the suffrage, nor did it determine the size of the population required to establish ayuntamientos, two significant factors in determining the popular nature of the Hispanic constitutional system. These decisions were the prerogatives of the states. 
The constitutions of the states of the Mexican federation varied, but they generally followed the precedents of the Constitution of Cádiz. Most state constitutions explicitly defined the people in their territory as being citizens of the state; they were chiapanecos, sonorenses, chihuahuenses, duranguenses, guanajuatenses, etc. Some states, such as Mexico and Puebla, simply referred to "the natives and citizens of the state". Following the Cádiz model, all states established indirect elections. A few, however, introduced property qualifications. Many also followed the constitution of 1812 in allowing ayuntamientos in towns with more than 1,000 persons, but some raised the population requirement to 2,000, 3,000 or 4,000. Tabasco only permitted the cabeceras of the partido (district head towns) to have ayuntamientos. Article 78 of Veracruz's constitution stated that the jefe of the department "will arrange the number and function of the ayuntamientos".
The 1824 Constitution was composed of 7 titles and 171 articles, and was based on the Constitution of Cádiz for American issues, on the United States Constitution for the formula of federal representation and organization, and on the Constitutional Decree for the Liberty of Mexican America of 1814 (the Decree of Apatzingán), which had rejected monarchy. It introduced the system of federalism in a popular representative republic with Catholicism as the official religion. The 1824 constitution does not expressly state the rights of citizens. The right to equality of citizens was restricted by the continuation of military and ecclesiastical courts. The most relevant articles were:
- 1. The Mexican nation is sovereign and free from the Spanish government and any other nation.
- 3. The religion of the nation is the Roman Catholic Church; it is protected by law, and the exercise of any other is prohibited.
- 4. The Mexican nation adopts as its form of government a popular federal representative republic.
- 6. The supreme power of the federation is divided into Legislative power, Executive power and Judiciary power.
- 7. Legislative power is deposited in a Congress of two chambers—a Chamber of Deputies and a Chamber of Senators.
- 50. Political freedom of the press in the federation and the states (paragraph 1).
- 74. Executive power is vested in a person called the President of the United Mexican States.
- 75. It provides for the office of vice president, who, in case of the physical or moral incapacity of the president, exercises the powers and prerogatives of the latter.
- 95. The term of the president and vice president shall be four years.
- 123. Judiciary power lies in a Supreme Court, the Circuit Courts and the District Courts.
- 124. The Supreme Court consists of eleven members divided into three chambers, plus a prosecutor.
- 157. The individual state governments will be formed by the same three powers.
Although this was not stipulated in the constitution, slavery was prohibited in the Republic. Miguel Hidalgo had proclaimed its abolition in Guadalajara on 6 December 1810. President Guadalupe Victoria also declared slavery abolished, but it was President Vicente Guerrero who issued the decree of Abolition of Slavery on 15 September 1829. At the time of the promulgation of the Constitution, the nation was composed of 19 free states and 3 territories. That same year, two changes were made in the structure, resulting finally in 19 free states, 5 territories and the federal district. 
|Map of Mexico under the Constitution of 1824|
The 19 founding states were those listed in Article 5: Chiapas, Chihuahua, Coahuila and Texas, Durango, Guanajuato, México, Michoacán, Nuevo León, Oaxaca, Puebla de los Ángeles, Querétaro, San Luis Potosí, Sonora and Sinaloa, Tabasco, Tamaulipas, Veracruz, Xalisco, Yucatán, and Zacatecas.
- The five Federal Territories were: Alta California, Baja California, Colima, Tlaxcala, and Santa Fe de Nuevo México.
- The Federal District was established around the City of México on November 18, 1824.
Due to the influence of Spanish liberal thought, the fragmentation that had been gradually consolidated by the Bourbon Reforms in New Spain, the newly won independence of Mexico, the size of the territory—almost 4,600,000 km² (1,776,069 sq mi)—and the lack of easy communication across such distances, the result was a federal system with regional characteristics. The central states—Mexico, Puebla, Querétaro, Guanajuato, Veracruz and Michoacán—which were the most populated, operated as an administrative decentralization. The states of the periphery—Zacatecas, Coahuila y Texas, Durango, Chihuahua, Jalisco, San Luis Potosí and Nuevo León—acquired a moderate confederalism. The states furthest from the center—Yucatán, Sonora y Sinaloa, Tamaulipas and Las Californias—acquired a radical confederalism.
Without established political parties, three political tendencies can be distinguished. The first still supported the empire of Iturbide, but was a minority. The second was influenced by the Yorkist Lodge of freemasonry, whose philosophy was radical federalism and which also encouraged an anti-Spanish sentiment largely promoted by the American plenipotentiary Joel Roberts Poinsett. The third was influenced by the Scottish Lodge of freemasonry, which had been introduced to Mexico by the Spaniards themselves, favored centralism, and yearned for the recognition of the new nation by Spain and the Holy See.
With the consummation of independence, the "Royal Patronage" was gone; the federal government and the state governments now considered these rights to belong to the State. The way church property should be managed was the point that most polarized the opinions of the political class. Members of the Yorkist Lodge intended to use church property to clean up the public finances, while members of the Scottish Lodge considered that alternative anathema. According to the federal compact, the states were to provide an amount of money and men for the army, the so-called blood quota. The federal budget was insufficient to pay for debt, defense, and surveillance of the borders, and the states resisted meeting the blood quota, sometimes filling it with criminals. Some state constitutions were more radical and made provisions to exercise patronage locally, under the banner of "freedom and progress". The constitutions of Jalisco and Tamaulipas decreed government funding of religion, the constitutions of Durango and the State of Mexico allowed the governor to exercise patronage, the constitution of Michoacán gave the local legislature the power to regulate the collection of fees and the discipline of the clergy, and the constitution of Yucatán, in a pioneering way, decreed freedom of religion.
Repeal and restoration
In 1835, there was a drastic shift in the new Mexican nation. The triumph of conservative forces in the elections unleashed a series of events that culminated on 23 October 1835, during the interim presidency of Miguel Barragán (the constitutional president was Antonio López de Santa Anna, but he was out of office), when the "Basis of Reorganization of the Mexican Nation" was approved, which ended the federal system and established a provisional centralist system. 
On 30 December 1836, interim president José Justo Corro issued the Seven Constitutional Laws, which replaced the Constitution. Secondary laws were approved on 24 May 1837. The Seven Constitutional Laws, among other things, replaced the "free states" with French-style "departments", centralizing national power in Mexico City. This created an era of political instability, unleashing conflicts between the central government and the former states. Rebellions arose in various places, the most important of which were: - Texas declared its independence following the change from the federalist system, and refused to participate in the centralized system. American settlers held a convention in San Felipe de Austin and declared the people of Texas to be at war against Mexico's central government, therefore ignoring the authorities and laws. Thus arose the Republic of Texas. - Yucatán under its condition of Federated Republic declared its independence in 1840 (officially in 1841). The Republic of Yucatán finally rejoined the nation in 1848. - The states of Nuevo León, Tamaulipas, and Coahuila became de facto independent from Mexico (in just under 250 days). The Republic of the Rio Grande never consolidated, because independence forces were defeated by the centralist forces. - Tabasco decreed its separation from Mexico in February 1841, in protest against centralism, rejoining in December 1842. The Texas annexation and the border conflict after the annexation led to the Mexican-American War. As a result, the Constitution of 1824 was restored by interim President José Mariano Salas on 22 August 1846. In 1847, The Reform Act was published, which officially incorporated, with some changes, the Federal Constitution of 1824, to operate while the next constitution was drafted. This federalist phase culminated in 1853. The Plan of Ayutla, which had a federalist orientation, was proclaimed on 1 March 1854. In 1855, Juan Álvarez, interim President of the Republic, issued the call for the Constituent Congress, which began its work on 17 February 1856 to produce the Federal Constitution of the United Mexican States of 1857. - Constitutions of Mexico - Constitutionalists in the Mexican Revolution - Political Constitution of the United Mexican States of 1917 (currently in force) - Federal Constitution of the United Mexican States (1824) - Manchaca (2001), p. 161 - Edmondson (2000), p. 71 - "La Diputación Provincial y el Federalismo Mexicano" (in Spanish). - "Historia de Mexico Volumen 2" (in Spanish). - "Manuel Gomez Pedraza (Cancilleres de Mexico)" (PDF) (in Spanish). - Vázquez (2009), pp. 534–535 - Vázquez (2009), p. 539 - Edmondson, J.R. (2000), The Alamo Story-From History to Current Conflicts, Plano, TX: Republic of Texas Press, ISBN 1-55622-678-0 - Manchaca, Martha (2001), Recovering History, Constructing Race: The Indian, Black, and White Roots of Mexican Americans, The Joe R. and Teresa Lozano Long Series in Latin American and Latino Art and Culture, Austin, TX: University of Texas Press, ISBN 0-292-75253-9 - Vázquez, Josefina Zoraida (2009), "Los primeros tropiezos", in Daniel Cosío Villegas; et al., Historia general de México, versión 2000, Mexico City: Centro de Estudios Históricos de El Colegio de México, pp. 525–582, ISBN 968-12-0969-9
By the end of this section, you will be able to:
- Calculate the total force (magnitude and direction) exerted on a test charge from more than one charge.
- Describe an electric field diagram of a positive point charge and of a negative point charge with twice the magnitude of the positive charge.
- Draw the electric field lines between two points of the same charge and between two points of opposite charge.
The information presented in this section supports the following AP® learning objectives and science practices:
- 2.C.1.2 The student is able to calculate any one of the variables – electric force, electric charge, and electric field – at a point given the values and sign or direction of the other two quantities.
- 2.C.2.1 The student is able to qualitatively and semiquantitatively apply the vector relationship between the electric field and the net electric charge creating that field.
- 2.C.4.1 The student is able to distinguish the characteristics that differ between monopole fields (gravitational field of spherical mass and electrical field due to single point charge) and dipole fields (electric dipole field and magnetic field) and make claims about the spatial behavior of the fields using qualitative or semiquantitative arguments based on vector addition of fields due to each point source, including identifying the locations and signs of sources from a vector diagram of the field. (S.P. 2.2, 6.4, 7.2)
- 2.C.4.2 The student is able to apply mathematical routines to determine the magnitude and direction of the electric field at specified points in the vicinity of a small set (2-4) of point charges, and express the results in terms of magnitude and direction of the field in a visual representation by drawing field vectors of appropriate length and direction at the specified points. (S.P. 1.4, 2.2)
- 3.C.2.3 The student is able to use mathematics to describe the electric force that results from the interaction of several separated point charges (generally 2-4 point charges, though more are permitted in situations of high symmetry). (S.P. 2.2)
Drawings using lines to represent electric fields around charged objects are very useful in visualizing field strength and direction. Since the electric field has both magnitude and direction, it is a vector. Like all vectors, the electric field can be represented by an arrow that has length proportional to its magnitude and that points in the correct direction. (We have used arrows extensively to represent force vectors, for example.) Figure 18.30 shows two pictorial representations of the same electric field created by a positive point charge Q. Figure 18.30 (b) shows the standard representation using continuous lines. Figure 18.30 (a) shows numerous individual arrows, with each arrow representing the force on a test charge q. Field lines are essentially a map of infinitesimal force vectors. Note that the electric field is defined for a positive test charge q, so that the field lines point away from a positive charge and toward a negative charge. (See Figure 18.31.) The electric field strength is exactly proportional to the number of field lines per unit area, since the magnitude of the electric field for a point charge is E = k|Q|/r² and the area over which a fixed number of lines spreads is proportional to r². This pictorial representation, in which field lines represent the direction and their closeness (that is, their areal density or the number of lines crossing a unit area) represents strength, is used for all fields: electrostatic, gravitational, magnetic, and others. 
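The inverse-square scaling behind field-line density can be checked numerically. The short Python sketch below is a minimal illustration only; the 3 nC charge and the sample distances are hypothetical values chosen for the example, not taken from the figures.

k = 8.99e9   # N·m²/C², Coulomb's constant
Q = 3.0e-9   # C, hypothetical 3 nC point charge (illustrative value only)

def field_magnitude(Q, r):
    """Magnitude of the electric field a distance r from a point charge Q."""
    return k * abs(Q) / r**2

for r in (0.1, 0.2, 0.4):
    print(f"r = {r:4.1f} m  ->  E = {field_magnitude(Q, r):8.1f} N/C")
# Each doubling of r divides E by four, matching the thinning of field lines
# spread over a sphere whose surface area grows as r**2.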
In many situations, there are multiple charges. The total electric field created by multiple charges is the vector sum of the individual fields created by each charge. The following example shows how to add electric field vectors.
Adding Electric Fields
Find the magnitude and direction of the total electric field due to the two point charges, q1 and q2, at the origin of the coordinate system as shown in Figure 18.32.
Since the electric field is a vector (having magnitude and direction), we add electric fields with the same vector techniques used for other types of vectors. We first must find the electric field due to each charge at the point of interest, which is the origin of the coordinate system (O) in this instance. We pretend that there is a positive test charge, q, at point O, which allows us to determine the direction of the fields E1 and E2. Once those fields are found, the total field can be determined using vector addition.
The electric field strength at the origin due to q1 is labeled E1 and is calculated from E1 = k q1 / r1², where r1 is the distance from q1 to the origin; E2 is found in the same way. Four digits have been retained in this solution to illustrate that one of the two field strengths is exactly twice the magnitude of the other. Now arrows are drawn to represent the magnitudes and directions of E1 and E2. (See Figure 18.32.) The direction of the electric field is that of the force on a positive charge, so both arrows point directly away from the positive charges that create them. The arrow for the larger field is exactly twice the length of that for the smaller one. The arrows form a right triangle in this case and can be added using the Pythagorean theorem: the magnitude of the total field is E_tot = √(E1² + E2²). The direction, measured above the x-axis, is found from the inverse tangent of the ratio of the vertical field component to the horizontal one.
In cases where the electric field vectors to be added are not perpendicular, vector components or graphical techniques can be used. The total electric field found in this example is the total electric field at only one point in space. To find the total electric field due to these two charges over an entire region, the same technique must be repeated for each point in the region. This impossibly lengthy task (there are an infinite number of points in space) can be avoided by calculating the total field at representative points and using some of the unifying features noted next.
Figure 18.33 shows how the electric field from two point charges can be drawn by finding the total field at representative points and drawing electric field lines consistent with those points. While the electric fields from multiple charges are more complex than those of single charges, some simple features are easily noticed. For example, the field is weaker between like charges, as shown by the lines being farther apart in that region. (This is because the fields from each charge exert opposing forces on any charge placed between them.) (See Figure 18.33 and Figure 18.34(a).) Furthermore, at a great distance from two like charges, the field becomes identical to the field from a single, larger charge. Figure 18.34(b) shows the electric field of two unlike charges. As the two unlike charges are also equal in magnitude, the pair of charges is also known as an electric dipole. The field is stronger between the charges. In that region, the fields from each charge are in the same direction, and so their strengths add. The field of two unlike charges is weak at large distances, because the fields of the individual charges are in opposite directions and so their strengths subtract. At very large distances, the field of two unlike charges looks like that of a smaller single charge. 
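The vector addition used in the example can be carried out for any small set of point charges. The Python sketch below is a generic illustration of that superposition procedure; the charge values and positions shown are hypothetical and are not the values in Figure 18.32.

import math

k = 8.99e9  # N·m²/C², Coulomb's constant

def field_at(point, charges):
    """Total electric field (Ex, Ey) at `point` due to a list of (q, (x, y)) charges."""
    px, py = point
    Ex = Ey = 0.0
    for q, (x0, y0) in charges:
        dx, dy = px - x0, py - y0
        r = math.hypot(dx, dy)
        E = k * q / r**2          # signed magnitude; a negative q reverses the direction
        Ex += E * dx / r          # (dx/r, dy/r) is the unit vector from the charge to the point
        Ey += E * dy / r
    return Ex, Ey

# Hypothetical arrangement: two positive charges near the origin O (not the textbook's values)
charges = [(5.0e-9, (0.02, 0.0)),    # 5 nC located 2 cm from O along +x
           (10.0e-9, (0.0, 0.04))]   # 10 nC located 4 cm from O along +y
Ex, Ey = field_at((0.0, 0.0), charges)
magnitude = math.hypot(Ex, Ey)
direction = math.degrees(math.atan2(Ey, Ex))
print(f"E_total = {magnitude:.3e} N/C at {direction:.1f} degrees from the +x axis")

At a point where the two contributions happen to be perpendicular, as in the worked example, the printed magnitude reduces to the Pythagorean result.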
We use electric field lines to visualize and analyze electric fields (the lines are a pictorial tool, not a physical entity in themselves). The properties of electric field lines for any charge distribution can be summarized as follows: - Field lines must begin on positive charges and terminate on negative charges, or at infinity in the hypothetical case of isolated charges. - The number of field lines leaving a positive charge or entering a negative charge is proportional to the magnitude of the charge. - The strength of the field is proportional to the closeness of the field lines—more precisely, it is proportional to the number of lines per unit area perpendicular to the lines. - The direction of the electric field is tangent to the field line at any point in space. - Field lines can never cross. The last property means that the field is unique at any point. The field line represents the direction of the field; so if they crossed, the field would have two directions at that location (an impossibility if the field is unique). Move point charges around on the playing field and then view the electric field, voltages, equipotential lines, and more. It's colorful, it's dynamic, it's free.
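Field-line pictures like Figures 18.33 and 18.34 can also be generated numerically. The sketch below is an optional illustration, assuming NumPy and matplotlib are available; it superimposes the fields of two equal and opposite point charges on a grid and draws streamlines. The charge size and spacing are arbitrary choices.

import numpy as np
import matplotlib.pyplot as plt

k = 8.99e9                                    # N·m²/C²
charges = [(1.0e-9, (-0.5, 0.0)),             # +1 nC (arbitrary illustrative dipole)
           (-1.0e-9, (0.5, 0.0))]             # -1 nC

x = np.linspace(-2.0, 2.0, 200)
y = np.linspace(-2.0, 2.0, 200)
X, Y = np.meshgrid(x, y)
Ex = np.zeros_like(X)
Ey = np.zeros_like(Y)
for q, (x0, y0) in charges:
    dx, dy = X - x0, Y - y0
    r = np.sqrt(dx**2 + dy**2)
    Ex += k * q * dx / r**3                   # vector form of E = kq/r² along (dx, dy)/r
    Ey += k * q * dy / r**3

plt.streamplot(X, Y, Ex, Ey, density=1.2, color="gray", linewidth=0.7)
plt.scatter([-0.5, 0.5], [0.0, 0.0], c=["red", "blue"])
plt.gca().set_aspect("equal")
plt.title("Streamlines of a two-charge (dipole) field")
plt.show()

One caveat: streamlines drawn this way show direction correctly, but their spacing is set by the plotting routine, so unlike true field lines their density is not an exact measure of field strength.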
Use your compass to construct a circle like the one shown below on a piece of paper. Describe how to fold the paper two times in order to help you construct a square. https://www.youtube.com/watch?v=t-ZtoNhEYWQ
Constructing an Equilateral Triangle
A regular polygon is a polygon that is equiangular and equilateral. This means that all its angles are the same measure and all its sides are the same length. The most basic example of a regular polygon is an equilateral triangle, a triangle with three congruent sides and three congruent angles. Squares are also regular polygons, because all their angles are the same and all their sides are the same length. Regular polygons with five or more sides do not have special names. Instead, the word regular is used to describe them. For example, a regular hexagon is a hexagon (6-sided polygon) whose angles are all the same measure and sides are all the same length. All regular polygons have rotation symmetry. This means that a rotation of less than 360° will carry the regular polygon onto itself. In fact, a regular n-sided polygon has rotation symmetry for any multiple of 360°/n.
Constructions are step-by-step processes used to create accurate geometric figures. To create a construction by hand, there are a few tools that you can use:
- Compass: A device that allows you to create a circle with a given radius. Not only can compasses help you to create circles, but also they can help you to copy distances.
- Straightedge: Anything that allows you to produce a straight line. A straightedge should not be able to measure distances. An index card works well as a straightedge. You can also use a ruler as a straightedge, as long as you only use it to draw straight lines and not to measure.
- Paper: When a geometric figure is on a piece of paper, the paper itself can be folded in order to construct new lines.
You can construct some regular polygons by hand if you remember the definitions and properties of these regular polygons. With the additional help of geometry software or a protractor, you can construct any regular polygon.
The given segment is one side of what will become an equilateral triangle. You need to put the third vertex in the correct place in order to make the equilateral triangle. Where should that point be placed with respect to the segment's two endpoints?
Solution: Let the length of the segment be the distance between its two endpoints. The third vertex needs to be that same distance away from each of the two endpoints.
Use a straightedge to draw a line segment. Use the ideas from Example A to construct an equilateral triangle on it.
Solution: Use a compass to measure the length of the segment. Make a partial circle of points that are that length away from one endpoint. Make another partial circle of points that are that length away from the other endpoint. The point of intersection of these two partial circles is the third vertex of the triangle.
Four points lie on a circle, positioned so that the radii drawn to them form right angles at the center (as when two perpendicular diameters are drawn). Prove that the quadrilateral formed by connecting the four points is a square.
Solution: The four radii are congruent because they are all radii of the same circle. Since one of the central angles is a right angle, the other central angles must also be right angles. Therefore, all four central angles are congruent. This means that the four triangles formed by the radii are congruent by SAS. The four sides of the quadrilateral are congruent because they are corresponding parts of congruent triangles. All four triangles are isosceles because they each have two congruent sides. This means that their base angles are congruent. Because the vertex angle of each triangle is 90°, the base angles of each triangle must be 45°. The four angles that make up the quadrilateral are each made of two of these angles, and are each therefore 90°. Because the quadrilateral has four congruent sides and four right angles, it is a square. 
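The compass construction in Example B can be mirrored numerically: the third vertex lies at the intersection of two circles whose radius is the length of the starting segment. The Python sketch below is an illustration only, with coordinates chosen arbitrarily; it computes both possible intersection points.

import math

def third_vertex(a, b):
    """Return the two possible third vertices of an equilateral triangle on segment ab.

    Each vertex lies at distance |ab| from both endpoints, i.e. at the
    intersection of two compass circles of radius |ab| centered at a and b.
    """
    ax, ay = a
    bx, by = b
    d = math.hypot(bx - ax, by - ay)        # side length = compass radius
    mx, my = (ax + bx) / 2, (ay + by) / 2   # midpoint of the segment
    h = d * math.sqrt(3) / 2                # height of an equilateral triangle
    ux, uy = -(by - ay) / d, (bx - ax) / d  # unit vector perpendicular to the segment
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

c1, c2 = third_vertex((0.0, 0.0), (1.0, 0.0))
print(c1, c2)   # approximately (0.5, 0.866) and (0.5, -0.866)

For the unit segment from (0, 0) to (1, 0), the two candidates are (0.5, ±√3/2), matching the two arcs a compass would draw above and below the segment.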
Concept Problem Revisited
Fold the circle so that the two halves overlap to create a crease that is the diameter. Fold the circle in half again to create the perpendicular bisector of the diameter. To do this, fold so that the two endpoints of the diameter meet. The second crease will also be a diameter. Note that the two diameters are perpendicular to one another. Connect the four points of intersection on the circle to construct the square. You can be certain that this is a square due to the proof in Example C.
A regular polygon is a polygon that is equiangular (all angles the same measure) and equilateral (all sides the same length). A drawing is a rough sketch used to convey an idea. A construction is a step-by-step process used to create an accurate geometric figure. A compass is a device that allows you to create a circle with a given radius. Compasses can also help you to copy distances. A straightedge is anything that allows you to produce a straight line. A straightedge should not be able to measure distances. An index card works well as a straightedge. You can also use a ruler as a straightedge, as long as you only use it to draw straight lines and not to measure.
1. The regular hexagon below has been divided into six congruent triangles. What type of triangles are they? Explain.
2. Six points have been evenly spaced around the circle below. Explain why a regular hexagon is created when these points are connected.
3. Construct a regular hexagon inscribed in a circle.
1. They must be equilateral triangles.
- A full circle is 360°, so each angle at the center of the hexagon must be 60°. *This is also why regular hexagons demonstrate rotation symmetry at multiples of 60°.*
- The six triangles are congruent, so the six segments connecting the center of the hexagon to the vertices must be congruent. This means the six triangles are all isosceles.
- The base angles of each of the isosceles triangles must therefore be (180° − 60°)/2 = 60°.
- The measure of each angle of all of the triangles is 60°, so all the triangles are equilateral.
2. Because the six points are evenly spaced, each of the segments connecting the six points must be the same length, and because each side subtends the same arc of the circle, the interior angles are also congruent. Therefore, the polygon must be regular. Because there are six sides, it must be a regular hexagon.
3. “Inscribed in a circle” means all six vertices of the hexagon are on the same circle. Start by constructing a circle and a point on the circle. You know that the radius of the circle is the same as the length of each side of the hexagon (see guided practice #1). Therefore, your goal is to place six points around the circle that are the same distance apart from one another as the radius of the circle. Keep your compass open to the same width as the radius of the circle and make one new mark on the circle. Continue to make new marks around the circle that are the same distance apart from one another. Connect the intersection points to form the regular hexagon.
1. Construct an equilateral triangle.
2. Construct another equilateral triangle.
3. Explain why your process for constructing equilateral triangles works.
4. Construct a square inscribed in a circle by making two folds.
5. Justify why the polygon you've created is actually a square.
Use your straightedge to construct a segment.
6. Construct the perpendicular bisector of the segment.
7. Construct a circle with the segment as its diameter.
8. Construct a square inscribed in the circle by connecting the four endpoints of the diameters.
9. Extend your construction to a regular octagon by bisecting each of the right angles at the center of the circle.
10. 
Construct a regular hexagon inscribed in a circle. 11. Explain why the method for constructing a regular hexagon relies on a circle. 12. Explain how you could extend your construction of the regular hexagon to a construction of a regular 12-gon. 13. Construct an equilateral triangle. Explain how you could construct the circle that passes through the three points of the equilateral triangle. 14. Given an equilateral triangle inscribed in a circle, how could you extend the construction to construct a regular hexagon? 15. Given a circle and a protractor, explain how you could create a regular pentagon.
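For anyone who wants a quick numeric sanity check of the hexagon construction from guided practice #3, the sketch below (an added illustration with an assumed radius of 1, not part of the original exercises) places six points at 60° steps around a circle and confirms that every side of the resulting hexagon equals the radius.

```python
import math

r = 1.0                                       # assumed radius, for illustration
# Stepping a compass opened to r around the circle lands on points 60 degrees apart.
points = [(r * math.cos(k * math.pi / 3), r * math.sin(k * math.pi / 3)) for k in range(6)]

# Every side of the hexagon formed by joining consecutive points equals the radius.
sides = [math.dist(points[k], points[(k + 1) % 6]) for k in range(6)]
print([round(s, 6) for s in sides])           # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```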
Students will understand the value of planning and maintaining a balanced personal budget and learn about tools available to assist. FCCLA Activity Option: National Program-Financial Fitness-Cash Control Project at http://www.fcclainc.org/content/financial-fitness/
A budget or spending plan is a financial statement individuals can use to assist money management. It is important because it has many positive uses. A budget has two main components: income and expenses. Income is money earned. It can come from wages or salaries, tips, withdrawals from savings, interest earned on savings accounts, scholarships, monetary gifts, etc. An expense is money spent. The two main types of expenses are fixed and variable, also known as fixed and flexible. Fixed expenses are those which have to be paid by a certain date. These expenses are often contractual, and little can be done to change the expense in a short period of time. Flexible expenses can easily be reduced or eliminated and are often not due by a certain date. A net loss is when a person has more expenses than income during the time period of the budget. If this occurs, either increase the income or decrease the expenses. A net gain is when a person has more income than expenses. Remaining income can be allocated to savings, investing, or spending. The detail of a spending plan may vary depending upon a person's needs. However, it needs to have enough detail so a person knows where the money is going, regardless of the amount. Determine the appropriate record-keeping format to use, select categories for the budget, and select a time period, usually matching when paychecks are received (weekly, bi-weekly, monthly, etc.). Record keeping allows a person to recognize potential problems early if they are spending too much in one area. There are several record-keeping systems to choose from.
Content Outline, Activities and Teaching Strategies (All options do not necessarily need to be taught. Select ones to cover standards and objectives and according to your district policies.)
Option 1: Group Juggle. Use Teacher Information Group Juggle (pdf) for this energizer.
Option 2: Budget Buster. Use Teacher Information Budget Buster (pdf) for this activity.
Option 3: Budget Overview. Discuss methods of establishing a budget by using the Teacher Information (pdf) on Budget Overview with Budget Terms and Transparencies.
Option 4: Making a Budget. An activity to help students understand that budgeting must be flexible because there are always unexpected expenses that occur. Use the Making a Budget teacher information (pdf) material and My Own Budget student worksheet (pdf).
Option 5: FEFE Spending Plan. Students identify the components of a spending plan and develop one for a chosen career, a college student, personal use, or a school organization. Students then compare a monthly spending plan to an individual's actual expenses. The comparison will be completed by using a spending plan control method and discussing the success of the spending plan.
Step #1: Go to: http://www.fefe.arizona.edu/download-lessons. (You will need to register and log in to this website prior to use.)
Step #2: Click on Educational Resources, then click on Curriculum, then click on 15.0 Spending Plans.
Step #3: Click on 1.15.2 Developing a Spending Plan and download/print the lesson plan and resources.
Option 7: Financial Planning and Budgeting UEN Lesson Plan that contains an introduction of financial planning, budgeting money, identifying and prioritizing personal and financial goals, understanding what it means to budget money, identifying reasons to maintain a budget, create and maintain a personal budget that supports personal and financial goals, and if necessary make adjustments to a planned budget to stay out of the negatives. Step #1: Go to: www.uen.org. Step #2: Click on: Lesson Plans then click on Financial Literacy Step #3: Click on Financial Planning and Budgeting and download/print the lesson plan and resources. Author: Nicole Larsen Option 8: Guest Speaker The Utah Bankers Association will send a guest speaker to your classroom. The Banker will teach a "Your Bank and You" Course. To schedule a banker click on this website. Under the teacher heading you will see "click here for a request form". People often have a difficult time understanding the percentage of their take home pay that goes toward different expenses. Recommendations have been developed to guide individuals on how to allocate money. These recommendations are based upon national averages and guidelines for financial management. The method each individual uses to allocate money depends upon personal values, needs, and wants.
In any computer network, a device must be uniquely identified in order to be reached by any other network component (e.g., router, server, printer, computing equipment, sensor, etc.). For this, in IP networks, a 32-bit number assigned to each device, called the IP address, is used. An IPv4 address can be expressed both in a dotted-decimal format and in binary notation. In the former format, the 32 bits of the address are divided into four octets that are separated by periods, such as 192.168.2.3, which can be the IP address of a device. Each octet can have values between the number 0 (00000000 in binary) and 255 (11111111 in binary). The calculation for converting the binary value 11111111₂ (the subscript 2 denotes that the number is in binary, made of bits that can take the two values 0 or 1) to the corresponding decimal can be seen below:
11111111₂ = 1*2^7 + 1*2^6 + 1*2^5 + 1*2^4 + 1*2^3 + 1*2^2 + 1*2^1 + 1*2^0 = 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255
One more example is given here for a better understanding of the process:
10101000₂ = 1*2^7 + 0*2^6 + 1*2^5 + 0*2^4 + 1*2^3 + 0*2^2 + 0*2^1 + 0*2^0 = 128 + 32 + 8 = 168 (which is the second decimal number in the given IP)
The above calculations are very common in computer networks, and the interested reader should become familiar with them quickly. The same IP address described above, 192.168.2.3, can also be written in binary with 32 bits as 11000000 10101000 00000010 00000011, using the same technique that was described above. All the above analysis, along with the one that follows in this course, considers IPv4 addresses, meaning addresses that follow the IPv4 scheme of 32-bit addressing. Lately, due to the exhaustion of the 2^32 possible addresses that could be assigned in IPv4, there is a global effort to assign addresses in the new, updated addressing scheme called IPv6. IPv6 uses 128 bits for its addresses in an effort to create a number of addresses (2^128) so large that it will not be exhausted rapidly and will be able to handle the proliferation of devices that demand access to the Web. Further discussion about IPv6 addressing will be covered in a future version of this course.
A little help (Subnet Mask)
Each packet that travels inside an IP-based computer network searches to find its destination knowing only the destination's unique IP address. This means that decisions regarding the route of packets in the network are taken using only the destination address. Keeping in mind the enormous size of the web, it is easy to understand that no router or device is aware of the full network topology of the modern web. Instead, using the IP address, each device can forward the packet(s) in the right direction so that devices with better knowledge take over as the packet gets closer to its destination. In fact, the same mechanism takes place in real life. Imagine you want to visit your best friend in the university campus where he/she studies. Having no idea where the campus lies, first you want to reach the town where the campus is and then ask the locals about the exact location of the campus. This mechanism, where the decision about the forwarding of a packet takes place based on its IP address and the knowledge of the device that will decide, is called routing. From the above, it is easy to understand that addressing plays a significant role for routing as well. But in order to be able to handle addressing, the IP address is not the only information that someone needs.
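The octet-by-octet conversion described above can be reproduced in a few lines of Python. This snippet is an added illustration rather than part of the course material, using the example address 192.168.2.3.

```python
ip = "192.168.2.3"

# Dotted decimal -> binary, one octet at a time (each octet is 8 bits).
octets = [int(o) for o in ip.split(".")]
print(" ".join(f"{o:08b}" for o in octets))   # 11000000 10101000 00000010 00000011

# Binary -> decimal for a single octet: 10101000 = 128 + 32 + 8 = 168.
print(int("10101000", 2))                     # 168
```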
In fact, an IP address is divided into two smaller parts: the network part, which denotes the network where the source of the packet belongs, and the host part, which specifically and uniquely identifies the source. These two parts are expressed inside an IP address, and another piece of information is needed in order to identify them inside the IP address. This is the role of the subnet mask, another 32-bit number (the same size as the IP address) that reports the network and the host part of the IP address. The subnet mask always consists of a series of 1s followed by some 0s. The number of 1s in the subnet mask denotes the network part (in terms of bits) of the IP address, while the number of 0s denotes the host part. The following example clarifies this notion. Let's consider the IP address that we have seen earlier, 192.168.2.3, and the subnet mask 255.255.255.0. If we write these numbers in binary, then we have:
192.168.2.3 = 11000000 10101000 00000010 00000011
255.255.255.0 = 11111111 11111111 11111111 00000000
The first three octets correspond to the network part (n) and the last octet to the host part (h). From the above we can see that the first 3 octets of bits (24 bits) identify the network that the device belongs to, while the final octet identifies the device itself in that network. Since the subnet mask has 24 1s followed by 8 0s, apart from the dotted decimal format that we have seen (255.255.255.0), it can also be written as the prefix /24, showing the number of 1s it consists of. Remember that a subnet mask always starts with a series of 1s (their count is the number following the "/") and is filled in with 0s up to 32 digits/bits. As an example, here are three numbers, not all of which are valid subnet masks:
255.255.255.128 → 11111111 11111111 11111111 10000000 VALID
255.255.255.224 → 11111111 11111111 11111111 11100000 VALID
255.255.255.3 → 11111111 11111111 11111111 00000011 NOT VALID
The first two numbers are valid subnet masks because of the series of continuous 1s followed by a series of continuous 0s. The third number is not a valid subnet mask because the series of 1s is not followed only by 0s, since there are also 1s in the last bits. Therefore, this is not a valid subnet mask value. One last remark is that the number n equals the number of network bits in an IP address, while h equals the number of host bits. Keep in mind that, since an IP address is 32 bits long, h + n = 32.
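As an added aside (not part of the course material), both the /24 example and the mask-validity rule above can be checked with Python's standard ipaddress module.

```python
import ipaddress

ip   = ipaddress.ip_address("192.168.2.3")
mask = ipaddress.ip_address("255.255.255.0")              # the /24 mask from the example

network_part = ipaddress.ip_address(int(ip) & int(mask))  # the 24 network bits survive the AND
host_part    = int(ip) & ~int(mask) & 0xFFFFFFFF          # the 8 host bits are what remain
print(network_part, host_part)                            # 192.168.2.0 3

def is_valid_mask(mask_str: str) -> bool:
    """A mask is valid only if its 32 bits are a run of 1s followed by a run of 0s."""
    bits = int(ipaddress.ip_address(mask_str))
    inverted = ~bits & 0xFFFFFFFF          # becomes a run of trailing 1s for a valid mask
    return (inverted + 1) & inverted == 0  # true only when the 0s are all trailing

for m in ("255.255.255.128", "255.255.255.224", "255.255.255.3"):
    print(m, is_valid_mask(m))             # True, True, False
```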
Addresses with a … class
A very popular network architecture, in use since the early '80s on the Internet, suggested that the IP address space should be divided into five different classes of addresses, namely Class A, B, C, D and E. Those classes created networks of different sizes by having different values for the network part (n) and the host part (h). In the following figure you can see the differences between the values of n and h in the IP addresses of the aforementioned classes. From that figure, it is easy to understand that for Class A, n = 8 and h = 24 (24 = 32 - 8); therefore the number of networks that can be created is small (since n = 8), but each of these networks can support many hosts (h = 8+8+8 = 24), resulting in networks of large size. On the other hand, in Class B the number of available networks is increased (n = 8+8 = 16) while the number of possible hosts decreases (h = 16); therefore such an address allows for the creation of more networks, but each of smaller size. The same goes for Class C, where again n increases and h decreases.
Addresses of Class D and E cannot be given to our networks; they exist to serve special purposes that are out of the scope of this course. The following summarizes the number of host and network bits per class:
Class A: n = 8 network bits, h = 24 host bits, 2^24 - 2 hosts per network
Class B: n = 16 network bits, h = 16 host bits, 2^16 - 2 hosts per network
Class C: n = 24 network bits, h = 8 host bits, 2^8 - 2 hosts per network
For the calculation of the number of hosts per network, we use the formula 2^h - 2, where h is the number of host bits of each class. The decrease of the host count by 2 is because, in each network, 2 addresses are kept for special reasons: one for designating the whole network's address and the other for broadcasting a message in this network. Furthermore, there are two ways to determine the class to which an IP address belongs. The first is with the help of the subnet mask and can be seen below. Since Class A has n = 8, the first 8 bits of the subnet mask (i.e., those that indicate the network part, as mentioned above) should be equal to 1 and all the rest should be equal to 0. For Class B, n = 16, therefore the first 16 bits (from the left) equal 1, while in Class C the first 24 bits equal 1. At the end of each line you can also see the dotted decimal format of the subnet mask.
Class A: 11111111 00000000 00000000 00000000 (255.0.0.0)
Class B: 11111111 11111111 00000000 00000000 (255.255.0.0)
Class C: 11111111 11111111 11111111 00000000 (255.255.255.0)
The second way to find the class that an IP address belongs to is by looking at the address itself, and especially at the value of the first octet. For Class A, where the first 8 bits of the IP address point to the network part, the first bit must for design reasons always be 0 (i.e., 0xxxxxxx); the various combinations of IP addresses that follow that rule and, therefore, belong to Class A are all the addresses whose first octet of bits is between 00000000 and 01111111. That is the decimal numbers from 0 to 127. Since some of these addresses are reserved for other reasons, the correct range is from 1 to 126. For Class B addresses, the first 2 bits of the first octet are fixed as 10xxxxxx, creating IP addresses from 128 to 191. Finally, for Class C the first 3 bits are fixed (110xxxxx), creating IP addresses from 192 to 223. The following summarizes which IP addresses belong to which class, along with some useful information that can be extracted from this:
Class A: 1 fixed bit, first octet range 1 - 126, valid network numbers 1.0.0.0 – 126.0.0.0, number of networks 2^7 - 2 = 126, hosts per network 2^24 - 2
Class B: 2 fixed bits, first octet range 128 - 191, valid network numbers 128.0.0.0 – 191.255.0.0, number of networks 2^14 = 16,384, hosts per network 2^16 - 2
Class C: 3 fixed bits, first octet range 192 - 223, valid network numbers 192.0.0.0 – 223.255.255.0, number of networks 2^21 = 2,097,152, hosts per network 2^8 - 2
The available number of networks in each case is given by 2^(n-i), where n is the number of bits in the network part and i is the number of fixed bits in the first octet (see the second column). Therefore, Class A has 2^(8-1) = 2^7 networks. The additional minus 2 for Class A comes from the fact that the address 0.0.0.0 is used for designating the default route, while the address 127.0.0.1 is used for loopback tests. Therefore, these two special addresses do not belong to a usable network, and that is the reason for having 2^7 - 2 = 126 networks in Class A. This restriction does not apply to the other 2 classes.
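The first-octet rule and the hosts-per-network formula described above can be sketched in a few lines of Python; the sample addresses below are chosen only for illustration, and the snippet is an addition to the course text rather than part of it.

```python
def address_class(ip: str) -> str:
    """Classify an IPv4 address by the value of its first octet (classful rules)."""
    first = int(ip.split(".")[0])
    if 1 <= first <= 126:
        return "A"          # first bit 0xxxxxxx, excluding the reserved 0 and 127
    if 128 <= first <= 191:
        return "B"          # first bits 10xxxxxx
    if 192 <= first <= 223:
        return "C"          # first bits 110xxxxx
    return "D, E or reserved"

hosts_per_network = {"A": 2**24 - 2, "B": 2**16 - 2, "C": 2**8 - 2}

for ip in ("10.1.1.1", "172.16.5.9", "192.168.2.3"):      # sample addresses for illustration
    c = address_class(ip)
    print(ip, "is Class", c, "with", hosts_per_network[c], "hosts per network")
```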
Up until now, we have considered that one IP address is given to a device, and from this address we have learned to identify the class that the host belongs to. But what happens if we want to create more than one network while we have been assigned only one IP address?
Subnetting shows how we can create subnetworks from one given IP address. In order to create subnets, one must consider changing the values of the bits in the host part of the IP address and treat them, from now on, as subnet bits (s). The number of subnets that we can create is related to the number of subnet bits (s); in fact it is equal to 2^s. Therefore, by subnetting, the number of hosts per network is reduced (since h is decreased) but the number of subnets is increased (since the number of subnets = 2^s). In the following example, we consider the IP address 192.168.2.3 (the same as the one we used before) and we will try to create subnets from this address. First we write it down in bits:
11000000 10101000 00000010 00000011
Using the table above we can see that this address belongs to Class C. Therefore, the subnet mask is 255.255.255.0. The subnet mask in binary is:
11111111 11111111 11111111 00000000
Remembering that subnets are created by changing the values of host bits, we can see that by selecting to change the first two host bits (i.e., the first two bits of the fourth octet for Class C addresses, resulting in s = 2) we can create 4 (2^2) subnets. The number of hosts for each subnet will be 2^h - 2, but now h = 6, therefore 62 hosts per subnet. As an exercise, let's try to find the IP addresses of the 4 equal subnets we can create. The word equal means that the number of hosts that each subnet can have is the same for all 4 subnets. Also, to complete this task, we will need to remember a magic number, which is 2^h and for our example equals 64 (h = 6). Now, to find the address of the first subnet we place 0s in the bits of the host part of the IP address, that is, in the last octet for this example:
11000000 10101000 00000010 00000000
In decimal this IP address is 192.168.2.0. This is the first subnet's address. The next subnet's address (i.e., the second subnet) can easily be found by adding the magic number to the address of the previous subnet at the same octet, that is:
192.168.2.64 (address of the 2nd subnet)
192.168.2.128 (address of the 3rd subnet)
192.168.2.192 (address of the 4th subnet)
Now we can see the IP addresses that belong to the 1st subnet. They would be from 192.168.2.0 to 192.168.2.63, since 192.168.2.64 is the first IP address of the next subnet. The first address of each (sub)net is reserved to characterize the whole (sub)net and, therefore, cannot be given to any host. At the same time, the last IP address of each (sub)net is reserved as the broadcast address, an address used to send a packet to all the devices in that network. Remember that these two addresses in each (sub)net are the reason that we have to subtract 2 in order to find the number of hosts per (sub)network (2^h - 2).
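The same four subnets can also be listed with Python's standard ipaddress module. This is a quick cross-check added here, not part of the course material; its output matches the summary that follows.

```python
import ipaddress

network = ipaddress.ip_network("192.168.2.0/24")
for subnet in network.subnets(prefixlen_diff=2):      # borrow s = 2 bits -> 2**2 = 4 subnets
    print(subnet,
          "usable hosts:", subnet.num_addresses - 2,  # 2**6 - 2 = 62
          "broadcast:", subnet.broadcast_address)
```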
To summarize, for the 1st subnet we have:
1st subnet network IP address: 192.168.2.0
1st subnet first available IP address: 192.168.2.1 (192.168.2.0 + 1)
1st subnet last available IP address: 192.168.2.62
1st subnet broadcast IP address: 192.168.2.63 (192.168.2.64 - 1)
To better understand the logic, we will continue with the related information regarding the 2nd subnet:
2nd subnet network IP address: 192.168.2.64
2nd subnet first available IP address: 192.168.2.65 (192.168.2.64 + 1)
2nd subnet last available IP address: 192.168.2.126
2nd subnet broadcast IP address: 192.168.2.127 (192.168.2.128 - 1)
For the 3rd subnet:
3rd subnet network IP address: 192.168.2.128
3rd subnet first available IP address: 192.168.2.129 (192.168.2.128 + 1)
3rd subnet last available IP address: 192.168.2.190
3rd subnet broadcast IP address: 192.168.2.191 (192.168.2.192 - 1)
And the 4th:
4th subnet network IP address: 192.168.2.192
4th subnet first available IP address: 192.168.2.193 (192.168.2.192 + 1)
4th subnet last available IP address: 192.168.2.254
4th subnet broadcast IP address: 192.168.2.255 (the last address of the /24)
Beyond IP addressing, which is a fundamental skill of a network engineer, the routing process is also of significant importance. The routing process defines the flow of information inside a network and can be implemented using two different techniques: static routing or dynamic routing. Usually the characteristics of the designed network topology determine which of those two techniques should be applied. However, it is the network engineer's responsibility to choose the proper technique in order to achieve the desired functionality.
Static routing fundamentals
In this section of the course, a closer look at the static routing technique takes place. In order to clearly understand the functionality of routing, a brief analysis of the router's operation should be made. The router is the device which determines the flow of information in a network, and the different routing techniques are applied on this device. Similarly to a post office in real life, the router checks the destination of the packet and then forwards it along an appropriate route. In order to decide which path the packet should follow, the router contains routing tables. Those tables are filled with the IP prefixes of reachable networks. So when a packet arrives at a router, the router checks its destination and uses the longest prefix matching technique in order to decide which route the packet should take. When longest prefix matching takes place, the router compares the destination IP address of the packet with the routing table's entries. This comparison is performed bit by bit, and the entry that matches over the longest prefix wins, i.e., the most specific route is chosen.
Table 1. Routing table and Destination IP
Table 2. Longest Prefix Matching
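To make longest prefix matching concrete, here is a minimal sketch; the prefixes and next-hop addresses in this routing table are invented purely for illustration and do not come from the course topology.

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop (values invented for illustration).
routing_table = {
    ipaddress.ip_network("192.168.0.0/16"): "10.0.0.1",
    ipaddress.ip_network("192.168.2.0/24"): "10.0.0.2",
    ipaddress.ip_network("0.0.0.0/0"):      "10.0.0.254",   # default route matches everything
}

def next_hop(destination: str) -> str:
    dst = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dst in net]   # every prefix containing dst
    best = max(matches, key=lambda net: net.prefixlen)        # the longest prefix wins
    return routing_table[best]

print(next_hop("192.168.2.3"))   # 10.0.0.2   (the /24 is more specific than the /16)
print(next_hop("192.168.7.9"))   # 10.0.0.1
print(next_hop("8.8.8.8"))       # 10.0.0.254 (only the default route matches)
```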
When a network engineer applies static routing, (s)he practically alters the routing tables manually. As a result, (s)he defines the flow of the information in the network, as the routes become static. This routing method has advantages and disadvantages that are discussed below.
Main advantages of Static routing
The implementation of static routing requires a very good design of the network and advanced administration skills from the network engineer. One of the advantages provided by this technique is that it offers full control over the network flows to the network administrator. Furthermore, static routing can be used in small networks as it improves the security of the network, considering that network access is always controlled by the network engineer. Also, when static routing is applied, the routers do not consume CPU resources computing routes, as the routes are static, and therefore routers with low CPU can be used. Finally, another advantage of this kind of routing is the fact that it does not consume bandwidth, as it introduces low overhead in the network.
Main disadvantages of Static routing
On the other hand, it can be considered a disadvantage that static routing requires advanced networking skills and also very good knowledge of the network topology. Suppose that a new network administrator is hired to maintain the network of a company that is configured using static routing. The administrator should know every configuration that has taken place in every router, which is quite inefficient, especially when a network is of medium or large size. Furthermore, if a network malfunction occurs, the troubleshooting time increases. Finally, the scalability of a network is restricted when static routing takes place, as every new network extension will lead to the reconfiguration of all routers in the network.
In the hands-on part of this course, a configuration of static routing in a network topology takes place. This process is extremely beneficial for academic purposes, as the operation of routers and the construction of a fully functional network will add experience and knowledge for learners. Additionally, the operation of the router and the role of routing tables and the longest prefix matching technique will be examined in practice.
Static routing implementation hands on
In this section of the course, the implementation of the static routing technique takes place. Having in mind the theory of static routing and how it is applied, we proceed to the configuration of the routers in order to achieve end-to-end connectivity. The network topology presented below is a scenario where static routing should be applied. The network topology consists of three routers, where each router has two Fast Ethernet network interfaces. Based on the figure above, the routers Router-1 and Router-3 use only the FastEthernet0/0 interface and only Router-2 uses both interfaces. It is worth mentioning that all the given IP addresses are of Class C, and therefore their subnet mask is /24. The interface of Router-1 has the IP 192.168.1.1 and belongs to the 192.168.1.0 network. The FastEthernet0/0 interface of Router-2 has the 192.168.1.2 IP address and also belongs to the 192.168.1.0 network. The second interface of Router-2 has the 192.168.2.2 IP address and belongs to the 192.168.2.0 network, similarly to the interface of Router-3, which has the 192.168.2.1 IP address. Having in mind the information provided by the network topology, the configuration of the routers takes place. Each router should be configured via the Command Line Interface (CLI). The figure below displays how the proper configuration of a router can be accomplished. In this figure the process of configuring Router-1 is displayed. In order to access the privileged exec mode the enable command is used. Then the configure terminal command is used to enter the global configuration mode, where most of our work takes place. Information related to the exec mode, privileged exec mode and global configuration can be found in the Basic Network Router Configuration course.
When the configure terminal command is applied, we are able to see the config word inside the brackets (config). Now we are able to select the interface that we want to configure by using the interface command. Similarly, we are able to configure all the routers in the network topology according to our scenario. When this basic configuration is done, the IPs of the same network will be able to communicate (ping one another), but IPs of different networks will not. The solution to this problem is the implementation of a routing technique, and in our case the implementation of static routing.
Figure 3. Static routing configuration for Router-1
In order to apply static routing we should first enter the global configuration mode, as displayed above. Then we manually change the routing table of the router and we set routes by using the ip route command. Similarly, we perform the static routing configuration on Router-2 and Router-3, as the figures below display. It is worth mentioning that we should set all the necessary routes in order to achieve end-to-end connectivity. For example, if we set the routes on Router-1 and Router-2 but forget Router-3, we will not have end-to-end connectivity when we try to ping from Router-1 to Router-3, because the ICMP packets of Router-1 will reach Router-3 but the reply from Router-3 will never reach Router-1.
Figure 4. Static routing configuration for Router-2
Figure 5. Static routing configuration for Router-3
When the configuration of those three routers is complete and all the interfaces are up, our final network topology will display green lights, as the figure below shows.
Figure 6. Final network topology
Figure 7. Connectivity check
Finally, in order to check the end-to-end connectivity, we try to ping Router-1 from Router-3 and we also perform a traceroute. The figure above presents the results of our test, where we achieved end-to-end communication. The aforementioned scenario is an example of the static routing implementation. Nevertheless, in the following interactive section students will have hands-on experience performing static routing using the PT Anywhere tool. In the interactive part, students should create a topology identical to the one that has been presented, and perform similar actions in order to achieve end-to-end connectivity.
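The "forgotten return route" pitfall described above can also be illustrated outside the router CLI. The toy model below is an added sketch, not the actual device configuration: the networks mirror the example topology, and a ping only succeeds if both the request and the reply can be forwarded hop by hop.

```python
import ipaddress

# Directly connected networks per router, mirroring the example topology.
connected = {
    "Router-1": ["192.168.1.0/24"],
    "Router-2": ["192.168.1.0/24", "192.168.2.0/24"],
    "Router-3": ["192.168.2.0/24"],
}
# Manually configured static routes: destination network -> next-hop router.
static_routes = {
    "Router-1": {"192.168.2.0/24": "Router-2"},
    "Router-2": {},                 # directly connected to both networks
    "Router-3": {},                 # the route back to 192.168.1.0/24 was "forgotten"
}

def can_deliver(router: str, dest_ip: str, max_hops: int = 8) -> bool:
    dst = ipaddress.ip_address(dest_ip)
    for _ in range(max_hops):
        if any(dst in ipaddress.ip_network(n) for n in connected[router]):
            return True                                   # delivered on a local interface
        route = next((static_routes[router][n] for n in static_routes[router]
                      if dst in ipaddress.ip_network(n)), None)
        if route is None:
            return False                                  # no matching route: packet dropped
        router = route                                    # forward to the next hop
    return False

request = can_deliver("Router-1", "192.168.2.1")   # the ICMP request reaches Router-3
reply   = can_deliver("Router-3", "192.168.1.1")   # but the reply cannot get back
print(request, reply, "ping succeeds:", request and reply)   # True False ping succeeds: False
```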
Introduction to Dynamic routing
Beyond static routing there is also the dynamic routing technique, which is presented in this section of the course. In contrast to static routing, the dynamic technique uses routing protocols. Those routing protocols are responsible for updating the routing tables and also for network discovery and network communication, as they define the rules of communication among the routers. Some of the protocols used in dynamic routing are RIPv1 (Routing Information Protocol version 1), RIPv2 (Routing Information Protocol version 2), OSPF (Open Shortest Path First) and EIGRP (Enhanced Interior Gateway Routing Protocol). The choice of which protocol should be used is the engineer's responsibility. Later on, information related to certain routing protocols is presented and examined. Furthermore, dynamic routing is the preferred technique for small, medium and large size networks, as it can be easily implemented (simply activate and configure the protocols). Also, it provides scalability to a network, considering that it automatically adapts to network topology changes (adding or removing routers). Nevertheless, due to the usage of protocols, this kind of technique requires routers with more CPU and memory and introduces overhead on the communication links of the network.
Focusing on the operation of the protocols, algorithms are used for the route computations. The algorithms used in routing protocols can be divided into two categories: link-state algorithms and distance-vector algorithms. On the one hand, when link-state protocols are used, the router constructs a map of the available routers of the network in the form of a graph, where each path is assigned a cost and the best paths are calculated by the router (e.g., with Dijkstra's algorithm). On the other hand, when distance-vector protocols are used, the router calculates the path based on the hop count, where a hop is a node (e.g., with the Bellman-Ford algorithm). Finally, when a distance-vector protocol is used, the router periodically informs its neighboring routers about topology changes, in contrast to link-state protocols, where a router informs all the network nodes when the topology changes.
Figure 8. Dijkstra algorithm
RIPv2 (Routing Information Protocol version 2)
RIPv2 is a classless distance-vector protocol, which means that, unlike RIPv1, it carries subnet mask information in its updates and can therefore support networks with different subnet masks. When RIPv2 is used, the router periodically transmits updates to the other routers configured with the same protocol. Usually RIPv2 is used in small networks, as routes are limited to 15 hops (which means 15 routers); a destination further away is considered unreachable and the packet is dropped.
OSPF (Open Shortest Path First)
The OSPF protocol is a quite popular protocol, as it is very flexible, scalable and can be used in any kind of network (small, medium and large networks). This protocol is link-state, uses the Dijkstra algorithm and has no hop count limitation. The OSPF protocol provides the ability to divide a large network into many smaller networks, which are called areas. This ability is very useful for administration and troubleshooting purposes. Additionally, OSPF supports VLSM/CIDR networks and is preferred over RIPv1 and RIPv2 when dynamic routing has to be applied. Each router advertises the state of its links to the other routers, so every router builds a complete overview of the network topology. Also, OSPF route costs can be tuned according to specific criteria, such as link bandwidth.
Table 3. OSPF vs RIPv1 and RIPv2
Type of protocol: OSPF is link-state; RIPv1 and RIPv2 are distance-vector.
Updates: OSPF sends updates when the topology changes; RIPv1 and RIPv2 send periodic updates.
Hop count limit: OSPF has none; RIPv1 and RIPv2 are limited to 15 hops.
Hierarchical network support: OSPF yes (using areas); RIPv1 no (flat only); RIPv2 no (flat only).
Main advantages of Dynamic routing
Dynamic routing provides many benefits in the design and maintenance of a network and reduces the workload of the network engineer. One of the main advantages of the dynamic routing technique is that the routes are created automatically through the use of protocols; therefore no manual configuration of routes is needed. In addition, this type of routing can be used independently of the size of the network; it therefore adds scalability to our system and reduces the complexity of network administration and troubleshooting. These advantages explain why this type of routing is used in almost any network.
Main disadvantages of Dynamic routing
However, the fact that routing tables are automatically updated when dynamic routing is used in a network results in increased network traffic and introduces overhead to the network.
Also, the usage of this type of routing demands routers with a stronger CPU than those used with static routing, and therefore the cost of those routers increases. Finally, the dynamic routing technique cannot provide the security offered by the static routing technique, due to its nature (routes are learned automatically rather than configured manually by the administrator). Concluding, a hands-on part on dynamic routing follows, where the configuration of a network topology based on the OSPF protocol takes place. This process is extremely beneficial for academic purposes, as the operation of routers and the construction of a fully functional network will add experience and knowledge for learners. Additionally, the functionality of dynamic routing is highlighted with the usage of OSPF.
Dynamic routing implementation hands on
In this section of the course, the implementation of the dynamic routing technique takes place. Having in mind the theory of dynamic routing and the functionality of its protocols, we proceed to the configuration of the routers in order to achieve end-to-end connectivity. The network topology presented below is a scenario where the OSPF dynamic routing protocol should be applied. The network topology consists of four routers, where each router has two Fast Ethernet network interfaces.
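Since OSPF's route computation relies on Dijkstra's shortest-path algorithm (mentioned in the link-state discussion above), a compact sketch of that computation is included here; the routers, links and costs are invented for illustration and are unrelated to the hands-on topology below.

```python
import heapq

# A made-up link-state topology: router -> {neighbour: link cost}.
graph = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 2, "R4": 5},
    "R3": {"R1": 4, "R2": 2, "R4": 1},
    "R4": {"R2": 5, "R3": 1},
}

def dijkstra(source: str) -> dict:
    """Return the cheapest total cost from source to every other router."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                                  # stale entry, already improved
        for neighbour, cost in graph[node].items():
            new_cost = d + cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return dist

print(dijkstra("R1"))    # {'R1': 0, 'R2': 1, 'R3': 3, 'R4': 4}
```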
Jamestown Fort
Jamestown Colony, first permanent English settlement in North America, located near present-day Williamsburg, Virginia. Established on May 14, 1607, the colony gave England its first foothold in the European competition for the New World, which had been dominated by the Spanish since the voyages of Christopher Columbus in the late 15th century. The colony was a private venture, financed and organized by the Virginia Company of London. King James I granted a charter to a group of investors for the establishment of the company on April 10, 1606. During this era, “Virginia” was the English name for the entire East Coast of North America north of Florida. The charter gave the company the right to settle anywhere from roughly present-day North Carolina to New York state. The company’s plan was to reward investors by locating gold and silver deposits and by finding a river route to the Pacific Ocean for trade with the Orient.
Jamestown Settlement: Godspeed
A contingent of approximately 105 colonists departed England in late December 1606 in three ships—the Susan Constant, the Godspeed, and the Discovery—under the command of Christopher Newport. They reached Chesapeake Bay on April 26, 1607. Soon afterward the captains of the three ships met to open a box containing the names of members of the colony’s governing council: Newport; Bartholomew Gosnold, one of the behind-the-scenes initiators of the Virginia Company; Edward-Maria Wingfield, a major investor; John Ratcliffe; George Kendall; John Martin; and Captain John Smith, a former mercenary who had fought in the Netherlands and Hungary. Wingfield became the colony’s first president. Smith had been accused of plotting a mutiny during the ocean voyage and was not admitted to the council until weeks later, on June 10. After a period of searching for a settlement site, the colonists moored the ships off a peninsula (now an island) in the James River on the night of May 13 and began to unload them on May 14. The site’s marshy setting and humidity would prove to be unhealthful, but the site had several apparent advantages at the time the colony’s leaders chose it: ships could pull up close to it in deep water for easy loading and unloading, it was unoccupied, and it was joined to the mainland only by a narrow neck of land, making it simpler to defend. The settlement, named for James I, was known variously during its existence as James Forte, James Towne, and James Cittie.
Most Indian tribes of the region were part of the Powhatan empire, with Chief Powhatan as its head. The colonists’ relations with the local tribes were mixed from the beginning. The two sides conducted business with each other, the English trading their metal tools and other goods for the Native Americans’ food supplies. At times the Indians showed generosity in providing gifts of food to the colony. On other occasions, encounters between the colonists and the tribes turned violent, and the Native Americans occasionally killed colonists who strayed alone outside the fort. On May 21, 1607, a week after the colonists began occupying Jamestown, Newport took five colonists (including Smith) and 18 sailors with him on an expedition to explore the rivers flowing into the Chesapeake and to search for a way to the Pacific Ocean. On returning, they found that the colony had endured a surprise attack and had managed to drive the attackers away only with cannon fire from the ships. However, when Newport left for England on June 22 with the Susan Constant and the Godspeed—leaving the smaller Discovery behind for the colonists—he brought with him a positive report from the council in Jamestown to the Virginia Company. The colony’s leaders wrote, and probably believed, that the colony was in good condition and on track for success. The report proved too optimistic. The colonists had not carried out the work in the springtime needed for the long haul, such as building up the food stores and digging a freshwater well. The first mass casualties of the colony took place in August 1607, when a combination of bad water from the river, disease-bearing mosquitoes, and limited food rations created a wave of dysentery, severe fevers, and other serious health problems. Numerous colonists died, and at times as few as five able-bodied settlers were left to bury the dead. In the aftermath, three members of the council—John Smith, John Martin, and John Ratcliffe—acted to eject Edward-Maria Wingfield from his presidency on September 10. Ratcliffe took Wingfield’s place. It was apparently a lawful transfer of power, authorized by the company’s rules that allowed the council to remove the president for just cause. Shortly after Newport returned in early January 1608, bringing new colonists and supplies, one of the new colonists accidentally started a fire that leveled all of the colony’s living quarters. The fire further deepened the colony’s dependence on the Indians for food. In accord with the Virginia Company’s objectives, much of the colony’s efforts in 1608 were devoted to searching for gold. Newport had brought with him two experts in gold refining (to determine whether ore samples contained genuine gold), as well as two goldsmiths. With the support of most of the colony’s leadership, the colonists embarked on a lengthy effort to dig around the riverbanks of the area. Councillor John Smith objected, believing the quest for gold was a diversion from needed practical work. “There was no talke, no hope, no worke, but dig gold, refine gold, load gold,” one colonist remembered. During the colony’s second summer, President Ratcliffe ordered the construction of an overelaborate capitol building. This structure came to symbolize the colony’s mismanagement in the minds of some settlers. With growing discontent over his leadership, Ratcliffe left office; whether he resigned or was overthrown is unclear.
John Smith took his place on September 10, 1608. To impose discipline on malingering colonists, Smith announced a new rule: “He that will not worke shall not eate (except by sicknesse he be disabled).” Even so, the colony continued to depend on trade with the Indians for much of its food supply. During Smith’s administration, no settlers died of starvation, and the colony survived the winter with minimal losses. In late September 1608 a ship brought a new group of colonists that included Jamestown’s first women: Mistress Forrest and her maid, Anne Burras. In London, meanwhile, the company received a new royal charter on May 23, 1609, which gave the colony a new form of management, replacing its president and council with a governor. The company determined that Sir Thomas Gates would hold that position for the first year of the new charter. He sailed for Virginia in June with a fleet of nine ships and hundreds of new colonists. The fleet was caught in a hurricane en route, however, and Gates’s ship was wrecked off Bermuda. Other ships of the fleet did arrive in Virginia that August, and the new arrivals demanded that Smith step down. Smith resisted, and finally it was agreed that he would remain in office until the expiration of his term the following month. His presidency ended early nonetheless. While still in command, Smith was seriously injured when his gunpowder bag caught fire from mysterious causes. He sailed back to England in early September. A nobleman named George Percy, the eighth son of an earl, took his place as the colony’s leader.
Solving for x means isolating x (or whatever the variable of interest is – it doesn't have to be called x) on one side of the equal sign (it doesn't matter which side). The way I like to think about the process of solving for x is that I need to "peel away" everything that's in some way "stuck" to it. Let's discover how to do this with examples. First the easy stuff, then we'll do more difficult problems. I'll only give some basics here. As you learn more about algebra, you'll be able to solve for variables in many other more interesting and challenging situations.

For our first example, we'll solve for x when some number is added to it, say an equation in which 3 is added to x on the left side. What needs to be done to isolate the variable x on the left side is to "move" that 3 over to the right. That's accomplished by subtracting a 3 from both sides. Remember, we have to do the same thing to both sides, unless it doesn't have an un-balancing consequence, like adding zero or multiplying by 1. Until you get used to doing this kind of small thing in your head, you might want to write the subtraction out on both sides explicitly. Then it's just some arithmetic on the right side to get the answer.

Let's do the same kind of equation, but this time x has something subtracted from it. Right away we know that the method for solving this problem is the same, because subtraction is just addition of a negative. Take a problem in which 8 is subtracted from x. We need to get rid of the 8 on the left by "moving it" to the right side. It is subtracted from x, so we use addition to move it to the right. You can write it out explicitly: -8 + 8 = 0 on the left, and then the final arithmetic on the right gives the value of x.

What if x has something multiplied by it? Well, it's not that different. We just get rid of the multiplier using the inverse operation of multiplication, division. Here's our example: 3x = 12. We'll move the 3 to the right side by dividing 3 into both the left and right sides of the equation. If what's on the left is equal to what's on the right, then 1/3 of each must also be equal. You can write it out and "cancel" the 3's on the left. Then just do the division on the right (12/3) to get the solution: x = 4.

The situation is very similar when x is divided by a constant. Here's an example: x/7 = 2, in which x is divided by 7. The inverse operation of division is multiplication, so we'll multiply both sides of the equation by 7 to move that 7 to the right side. The 7's on the left cancel, which is just to say that 7/7 = 1. Now 7(2) is 14, so we have x = 14.

Now let's do some combinations. In this example the variable x has something multiplying it and something added to that: 4x - 5 = 13. We'll need two steps to liberate the x this time. Do the easy one first, and add the 5 to both sides to move it to the right. Then "divide away" the 4. First the addition to move the 5, which gives the intermediate result 4x = 18. Then divide both sides by 4, canceling the 4's on the left, and complete the arithmetic: x = 18/4 = 9/2. I'll leave the answer like that. I think it's unfortunate that this kind of fraction has been called "improper." There's nothing wrong with it, and in fact in mathematics it's preferred over the compound fraction 4-1/2.

Here's another combination example, this time with x divided by 4 and 3 added to that. We'll need to move the three (first – it's the easiest) by subtraction, then the 4 by multiplication (the inverse of division). First subtract 3 from both sides to move the 3; then multiply both sides by 4 to get rid of the 4 on the left and isolate x. Notice that in the last two examples, we chose to do the subtraction and addition parts before the division or multiplication.
Let's try one of those again, the equation 4x - 5 = 13. But this time we'll divide by the 4 first, then do our addition to get x alone. First the division, noting that in this case we have to divide all three terms of the equation by 4: the result is x - 5/4 = 13/4. Now we move the 5/4 to the right side to find x: x = 13/4 + 5/4 = 18/4 = 9/2, which is the same result we got above. But ... it was a bit more cumbersome than just doing it the easy way from the beginning. These can get more complicated, too, so it's important to get the order right. You know about order of operations (we gave it the acronym PEMDAS) if you read the algebra basics section, and it turns out you can't go wrong in solving for x if you just reverse it: "SADMEP." That means to do the subtraction and addition first, then the division and multiplication, then the exponents and what's inside parentheses last. Check out examples 5 and 6 above and you'll see that's exactly what we did. I like to think of solving for x in this way as "picking the low-hanging fruit" first. I do what's easy first, usually addition and subtraction. I keep peeling layers away in this way until I have x isolated.

Finally, what about an equation in which x is in the denominator? This kind of problem can cause all kinds of confusion, but the basic procedure is rock-solid and easy to remember. Here's the problem: 1/x = 8. There's one thing that really needs to be done in such a problem, and that's to get the variable out of the denominator. Always remember that. To remove x from the denominator, we multiply both sides by x. The x's on the left will cancel (x/x = 1), and the x on the right is now in the numerator of 8x, giving 1 = 8x. Now it's a straightforward process to isolate x on the right side: divide both sides by 8 to get x = 1/8. You can check this answer in the original equation if you remember that dividing by 1/8 is the same as multiplying by the reciprocal of 1/8, which is 8.
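As an optional check of the worked examples above, a computer algebra system can do the peeling automatically. The snippet below uses the SymPy library and is an addition to this page, not part of the original lesson.

```python
from sympy import symbols, Eq, solve

x = symbols("x")

print(solve(Eq(3 * x, 12), x))       # [4]
print(solve(Eq(x / 7, 2), x))        # [14]
print(solve(Eq(4 * x - 5, 13), x))   # [9/2], the "improper" fraction found above
print(solve(Eq(1 / x, 8), x))        # [1/8]
```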
Printable Volume and Surface Area Worksheets
Our volume and surface area worksheets are designed for KS3 and KS4 students, and cover everything your pupil or child needs to know to conquer the topics. We have fun and challenging maths worksheets on topics from the surface area of a cuboid, to the volume of a sphere, through to the volume of frustums worksheet at the higher levels. We have a great range of printable pdf worksheets that will challenge students of all abilities. All volume and surface area worksheets are supplied with answers to ensure you can easily assess how well each student is progressing and identify any areas for revision. Volume and surface area can be a fun and practical topic, so make it enjoyable for your students by using Cazoom Maths worksheets.
The worksheets, each available with answers and graded by level from KS3 up to GCSE, include:
- Volume of Cuboids made with Cubes
- Volume and Surface Area of Cuboids
- Volume of Prisms
- Volume of Compound 3D Shapes
- Surface Area of Prisms
- Surface Area of Cones and Spheres
- Volume of Pyramids, Cones and Spheres
- Formulae for Pyramids, Cones and Spheres
- Volume and Surface Area of Cones and Spheres
- Volume of Pyramids and Cones
- Volume Word Problems
Real life applications of volume and surface area
Mastering volume and surface area calculations is a necessary skill for many different careers and activities. In engineering, volume and area are very important. Without volume, you can't figure out key calculations like density, for example. If you are shipping goods and want to know how many boxes will fit inside a container, then you need to know the volume and surface area of the items involved. Surface area is even important in biology, where certain biological structures often try to maximize their surface area! The applications of volume and surface area are endless. Use our maths worksheets to master calculations such as the volume of a cylinder and the volume and surface area of other 3D shapes.
Volume and Surface area at KS3 and KS4
Volume and surface area is a section of geometry that runs right the way through Key Stage 3 and Key Stage 4. Year 7 students will find resources such as our volume of cuboids worksheet or our recognising prisms worksheet very relevant. Students in year 11, on the other hand, will need to make use of our volume of pyramids and cones worksheet and our volume of frustums worksheet. We have a volume and surface area worksheet for students at all levels of their KS3 and KS4 studies.
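For teachers or students who like to check answers programmatically, here is a short sketch of a few of the standard formulas these worksheets practise; it is an added illustration, not a Cazoom resource, and the numbers are arbitrary examples.

```python
import math

def cuboid_volume(l, w, h):
    return l * w * h

def cuboid_surface_area(l, w, h):
    return 2 * (l * w + l * h + w * h)

def cylinder_volume(r, h):
    return math.pi * r ** 2 * h          # V = pi * r^2 * h

def sphere_volume(r):
    return 4 / 3 * math.pi * r ** 3      # V = 4/3 * pi * r^3

def sphere_surface_area(r):
    return 4 * math.pi * r ** 2          # SA = 4 * pi * r^2

print(cuboid_volume(3, 4, 5), cuboid_surface_area(3, 4, 5))          # 60 94
print(round(cylinder_volume(2, 10), 2))                              # 125.66
print(round(sphere_volume(3), 2), round(sphere_surface_area(3), 2))  # 113.1 113.1
```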
Although scientists at one time believed the contrary, we now know that a weak atmosphere surrounds the moon. Unfortunately, astronomers have been unable to collect data or understand the composition and characteristics of this atmosphere due to the location. This problem may soon be solved, however, with NASA’s recent Lunar Atmosphere and Dust Environment Explorer launch, known as LADEE. LADEE launched on the Minotaur V rocket, on September 6. Interestingly, this launch was the first to happen at NASA’s Virginia base, instead of Cape Canaveral. Now that LADEE is in the air, it will first orbit earth three times, which will take about a month. Once it reaches the right location, it will then begin to orbit the moon. The orbit will be anywhere from 20 to 60 kilometers from the surface, close enough to record information about the contents of the moon’s atmosphere. Studies will last about 100 days; once the operation finishes, LADEE will find its new home, crashed on the surface of the moon. One of the major goals of this operation is to understand the contents of the moon’s atmosphere, as well as the effect moon missions have had on it. Unlike Earth’s atmosphere, the moon’s atmosphere is comprised of minute particles floating over the surface. It is best compared to the Earth’s exosphere, the furthest layer of our atmosphere. Knowing more about the moon’s atmosphere and how it differs from our own could help scientists hypothesize the characteristics of the atmospheres of other celestial bodies. LADEE will collect and analyze samples of dust, which will not only allow scientists to understand the components of the atmosphere, but also answer the mystery created by a glow in the horizon, seen by Apollo astronauts. As of now, scientists are unable to definitively explain the glow, but hypothesize that it is the effect of dust particles in the atmosphere. With data from LADEE, NASA scientists will be able to determine whether or not this hypothesis is correct. LADEE is also special because of the technology it holds. This operation will test whether or not lasers are feasible for communication as it is the first craft to use Lunar Laser Communication Demonstration, or LLCD. If this program works, lasers could replace radio waves for transmissions between spacecraft and Earth. As science advances, this information could become crucial, allowing scientists to not only communicate with future crafts but also advance satellite capability. With better satellites, technology for observation, downloading, and terrestrial communication would most likely advance at rapid rates. The program LADEE is testing increases speed to 622 mbps, and as NASA boasts, should be capable of downloading an HD movie from the moon in 9 minutes. To put this in perspective, Wi-Fi’s capacity is at maximum 300 mbps. This test could create the foundation of communication and satellite technology in the upcoming years. There were a few problems with LADEE’s takeoff, however. Scientists uncovered a glitch inside the code for the craft’s spacecraft reaction wheels. Fortunately, this was discovered and fixed quickly in the launch; otherwise it could have easily ruined the mission. The reaction wheels stabilize the craft in flight, and unlike other devices, can do so without using rocket fuel, a precious commodity on a long flight. 
According to NASA Ames Research Center director Pete Worden, “The LADEE spacecraft is healthy and communicating with mission operators.” Ames Research Center is LADEE’s birthplace, and they control the mission. Hopefully the rest of the mission runs smoothly, giving scientists both crucial information on the moon’s atmosphere and insight into future communication possibilities.
The cosmic story that unfolded following the Big Bang is ubiquitous no matter where you are. The formation of atomic nuclei, atoms, stars, galaxies, planets, complex molecules, and eventually life is a part of the shared history of everyone and everything in the Universe. As we understand it today, life on our world began, at the latest, only a few hundred million years after Earth was formed. That puts life as we know it already nearly 10 billion years after the Big Bang. The Universe couldn't have formed life from the very first moments; both the conditions and the ingredients were all wrong. But that doesn't mean it took all those billions and billions of years of cosmic evolution to make life possible. It could have begun when the Universe was just a few percent of its current age. Here's when life might have first arisen in our Universe. At the moment of the hot Big Bang, the raw ingredients for life could in no way stably exist. Particles, antiparticles, and radiation all zipped around at relativistic speeds, blasting apart any bound structures that might form by chance. As the Universe aged, though, it also expanded and cooled, reducing the kinetic energy of everything in it. Over time, antimatter annihilated away, stable atomic nuclei formed, and electrons could stably bind to them, forming the first neutral atoms in the Universe. Yet these earliest atoms were only hydrogen and helium: insufficient for life. Heavier elements, such as carbon, nitrogen, oxygen and more, are required to build the molecules that all life processes rely on. For that, we need to form stars in great abundance, have them go through their life-and-death cycle, and return the products of their nuclear fusion to the interstellar medium. It takes 50-to-100 million years to form the first stars, sure, which form in relatively large clusters. But in the densest regions of space, these star clusters will gravitationally pull in other matter, including material for additional stars and other star clusters, paving the way for the first galaxies. By time only ~200-to-250 million years have passed, not only will multiple generations of stars have lived-and-died, but the earliest star clusters will have grown into galaxies. This is important, because we don't just need to create the heavy elements like carbon, nitrogen, and oxygen; we need to create enough of them — and all of the life-essential elements — to produce a wide diversity of organic molecules. We need those molecules to stably exist in a location where they can experience an energy gradient, such as on a rocky moon or planet in the vicinity of a star, or with enough undersea hydrothermal activity to support certain chemical reactions. And we need for those locations to be stable enough that whatever counts as a life process can self-sustain. In astronomy, all of these conditions get lumped together by a single term: metals. When we look at a star, we can measure the strength of the different absorption lines coming from it, which tell us — in combination with the star's temperature and ionization — what the abundances of the different elements are that went into creating it. Add them all up, and that gives you the star's metallicity, or the fraction of the elements within it that are heavier than either plain hydrogen or helium. Our Sun's metallicity is somewhere between 1-and-2%, but that might be excessive for a requirement for life. 
Stars possessing just a fraction of that, perhaps as little as 10% the Sun's heavy element content, might still have enough of the necessary ingredients, across-the-board, to make life possible. This gets really interesting when we look nearby, at globular clusters. Globular clusters contain some of the oldest stars in the Universe, with many of them forming when the Universe was less than 10% its current age. They formed when a very massive cloud of gas collapsed, leading to stars that are all of the same age. Since a star's lifetime is determined by its mass, we can look at the stars remaining in a globular cluster and determine its age. For the more than 100 globular clusters in our Milky Way, most of them formed 12-to-13.4 billion years ago, which is extremely impressive considering the Big Bang occurred just 13.8 billion years ago. Most of the oldest ones, as you might expect, have just 2% of the heavy elements that our Sun has; they're metal-poor and unsuited for life. But a few globular clusters, like Messier 69, offer a tremendous possibility.

Like most globular clusters, Messier 69 is old. It has no O-stars, no B-stars, no A-stars and no F-stars; the most massive stars remaining are comparable in mass to our Sun. Based on our observations, it appears to be 13.1 billion years old, meaning its stars come from just 700 million years after the Big Bang. But its location is unusual. Most globular clusters are found in the halos of galaxies, but Messier 69 is a rare one found close to the galactic center: just 5,500 light-years away. (For comparison, our Sun is about 27,000 light-years from the galactic center.) This close proximity means that:
- more generations of stars have lived-and-died here than on the galaxy's outskirts,
- more supernovae, neutron star mergers and gamma-ray bursts have occurred here than where we are,
- and, therefore, these stars should have a much greater abundance of heavy elements than other globular clusters.

And boy, does this globular cluster ever deliver! Despite its stars forming when the Universe was just 5% its present age, the close proximity to the galactic center means that the material its stars formed from was already polluted and filled with heavy elements. When we deduce its metallicity today, even though these stars formed just a few hundred million years after the Big Bang, we find they have 22% the heavy elements that the Sun does.

So that's the recipe! Make many generations of stars quickly, form a planet resilient enough around one of the lower-mass, longer-lived stars (like a G-star or a K-star) to protect itself from whatever supernovae, gamma-ray bursts, or other cosmic catastrophes it may encounter, and let the ingredients do what they do. Whether we get lucky or not, there's certainly an opportunity for life at the centers of the oldest galaxies we could ever hope to discover. Wherever we look in space around the centers of galaxies, or around massive, newly forming stars, or in the environments where metal-rich gas is going to form future stars, we find a whole host of complex, organic molecules. These range from sugars to amino acids to ethyl formate (the molecule that gives raspberries their scent) to intricate aromatic hydrocarbons; we find molecules that are precursors to life. We only find them nearby, of course, but that's because we don't know how to look for individual molecular signatures much beyond our own galaxy.
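To make the percentages above concrete, here is a minimal sketch (not part of the original article) that converts a star's heavy-element content, given as a fraction of the Sun's, into the logarithmic [M/H] index astronomers commonly quote. It assumes the usual convention [M/H] ≈ log10(Z_star / Z_sun), where Z is the heavy-element fraction:

```python
import math

def metallicity_index(fraction_of_solar):
    """Convert a heavy-element abundance, expressed as a fraction of the Sun's,
    into the logarithmic [M/H] index: 1.0 -> 0.0 (solar), 0.1 -> -1.0."""
    return math.log10(fraction_of_solar)

# ~10% of the Sun's heavy elements, the rough threshold mentioned above:
print(round(metallicity_index(0.10), 2))   # -1.0
# ~22% of the solar value, as deduced for Messier 69's stars:
print(round(metallicity_index(0.22), 2))   # about -0.66
```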
But even when we look in our nearby neighborhood, we find some circumstantial evidence that life existed in the cosmos before Earth did. There's even some interesting evidence that life on Earth didn't begin on Earth. We still don't know how life in the Universe got its start, or whether life as we know it is common, rare, or a once-in-a-Universe proposition. But we can be certain that life came about in our cosmos at least once, and that it was built out of the heavy elements made from previous generations of stars. If we look at how stars theoretically form in young star clusters and early galaxies, we could reach that abundance threshold after several hundred million years; all that remains is putting those atoms together in a favorable-to-life arrangement. If we form the molecules necessary for life and put them in an environment conducive to life arising from non-life, suddenly the emergence of biology could have come when the Universe was just a few percent of its current age. The earliest life in the Universe, we must conclude, could have been possible before it was even a billion years old.

Further reading on what the Universe was like when:
- What was it like when the Universe was inflating?
- What was it like when the Big Bang first began?
- What was it like when the Universe was at its hottest?
- What was it like when the Universe first created more matter than antimatter?
- What was it like when the Higgs gave mass to the Universe?
- What was it like when we first made protons and neutrons?
- What was it like when we lost the last of our antimatter?
- What was it like when the Universe made its first elements?
- What was it like when the Universe first made atoms?
- What was it like when there were no stars in the Universe?
- What was it like when the first stars began illuminating the Universe?
- What was it like when the first stars died?
- What was it like when the Universe made its second generation of stars?
- What was it like when the Universe made the very first galaxies?
- What was it like when starlight first broke through the Universe's neutral atoms?
- What was it like when the first supermassive black holes formed?
What is an algorithm?

In the most general sense, an algorithm is a series of instructions telling a computer how to transform a set of facts about the world into useful information. The facts are data, and the useful information is knowledge for people, instructions for machines or input for yet another algorithm. There are many common examples of algorithms, from sorting sets of numbers to finding routes through maps to displaying information on a screen. To get a feel for the concept of algorithms, think about getting dressed in the morning. Few people give it a second thought. But how would you write down your process or tell a 5-year-old your approach? Answering these questions in a detailed way yields an algorithm. In mathematics and computer science, an algorithm is a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of specific problems or to perform a computation.

What are the characteristics of an algorithm?
- Precision – the steps are precisely stated.
- Uniqueness – results of each step are uniquely defined and only depend on the input and the result of the preceding steps.
- Finiteness – the algorithm stops after a finite number of instructions are executed.
- Input – the algorithm receives input.
- Output – the algorithm produces output.
- Generality – the algorithm applies to a set of inputs.

How do computer algorithms work?

Computer algorithms work via input and output. They take the input and apply each step of the algorithm to that information to generate an output. For example, a search engine is an algorithm that takes a search query as an input and searches its database for items relevant to the words in the query. It then outputs the results. You can easily visualize algorithms as a flowchart. The input leads to steps and questions that need handling in order. When each section of the flowchart is completed, the generated result is the output.

How to design an algorithm?

1. Obtain a description of the problem

This step is much more difficult than it appears. You first have to create a description of the problem. It's quite common for a problem description to suffer from one or more of the following types of defects:
- The description relies on unstated assumptions,
- The description is ambiguous,
- The description is incomplete,
- Or the description has internal contradictions.

These defects are rarely due to carelessness. Instead, they are due to the fact that natural languages are rather imprecise. Part of your responsibility is to identify defects in the description of a problem and to work with the client to remedy those defects.

2. Analyze the problem

The purpose of this step is to determine both the starting and ending points for solving the problem. This process is analogous to a mathematician determining what is given and what must be proven. A good problem description makes it easier to perform this step. When determining the starting point, we should start by seeking answers to the following questions:
- What data are available?
- Where is that data?
- What formulas pertain to the problem?
- What rules exist for working with the data?
- What relationships exist among the data values?

When determining the ending point, we need to describe the characteristics of a solution. In other words, how will we know when we're done? Asking the following questions often helps to determine the ending point:
- What new facts will we have?
- What items will have changed?
- What changes will have been made to those items?
- What things will no longer exist?

3. Develop a high-level algorithm

An algorithm is a plan for solving a problem, but plans come in several levels of detail. It's usually better to start with a high-level algorithm that includes the major part of a solution but leaves the details until later.

4. Refine the algorithm by adding more detail

A high-level algorithm shows the major steps that need to be followed to solve a problem. Now we need to add details to these steps, but how much detail should we add? The answer to this question really depends on the situation. We have to consider who (or what) is going to implement the algorithm and how much that person (or thing) already knows how to do. When our goal is to develop algorithms that will lead to computer programs, we need to consider the capabilities of the computer and provide enough detail so that someone else could use our algorithm to write a computer program that follows the steps in our algorithm. Remember, when in doubt, or when you are learning, it is better to have too much detail than to have too little. For larger, more complex problems, it is common to go through this process several times, developing intermediate-level algorithms as we go. Each time, we add more detail to the previous algorithm, stopping when we see no benefit to further refinement. This technique of gradually working from a high level to a detailed algorithm is often called stepwise refinement. Stepwise refinement is a process for developing a detailed algorithm by gradually adding detail to a high-level algorithm.

5. Review the algorithm

The final step is to review the algorithm. What are we looking for? First, we need to work through the algorithm step by step to determine whether or not it will solve the original problem. Once we are satisfied that the algorithm does provide a solution to the problem, we start to look for other things. The following questions are typical of ones that should be asked whenever we review an algorithm. Asking these questions and seeking their answers is a good way to develop skills that can be applied to the next problem:
- Does this algorithm solve a very specific problem or does it solve a more general problem? If it solves a very specific problem, should it be generalized?
- Can this algorithm be simplified?
- Is this solution similar to the solution to another problem? How are they alike? How are they different?
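As a concrete illustration of the stepwise refinement described in step 4 (a sketch added here, not part of the original text, using a hypothetical averaging task), the high-level plan appears first as comments and is then refined into runnable Python:

```python
# High-level algorithm (little detail):
#   1. Get the list of numbers.
#   2. Compute their average.
#   3. Report the result.

# Refined algorithm: enough detail that it can run as a program.
def average(numbers):
    """Step 2 refined: add up every value, count them, and divide.
    An empty list has no average, so treat it as an error."""
    if not numbers:
        raise ValueError("cannot average an empty list")
    total = 0.0
    count = 0
    for value in numbers:
        total += value
        count += 1
    return total / count

def main():
    numbers = [3, 5, 8, 13]        # step 1: input is hard-coded here for illustration
    result = average(numbers)      # step 2: apply the refined computation
    print("average =", result)     # step 3: report the output

if __name__ == "__main__":
    main()
```

The same input-to-output shape described earlier applies here: the list is the input, the loop is the sequence of precisely stated steps, and the printed value is the output.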
Black history or African-American history is the history of the American population of black African descent, from the colonial period to the present. It was a narrow specialty until the Civil Rights Movement in the 1960s made it a high priority for historical research and teaching. It is now one of the largest fields of American history. The history is one of struggle against slavery, segregation, racism and second class citizenship. Historians debate whether to emphasize radical protest, as typified by W.E.B. DuBois, or upward striving through the system, as preached by Booker T. Washington. The 2008 election of Barack Obama as president has been hailed as the culmination of the black struggle for political equality.

See also: Slavery

Africans first arrived in 1619, when a Dutch ship sold 19 blacks as indentured servants (not slaves) to Englishmen at Jamestown, Virginia. In all, about 10-12 million Africans were transported to the Western Hemisphere. The vast majority of these people came from that stretch of the West African coast extending from present-day Senegal to Angola; a small percentage came from Madagascar and East Africa. Only 3% (about 300,000) went to the American colonies. The vast majority went to the West Indies, where they died quickly. Demographic conditions were highly favorable in the American colonies, with less disease, more food, good medical care, and lighter work loads.

Coming as they did from such an extensive area in Africa, they were not of one physical or cultural type. Significant differences existed among them, but they shared a general set of characteristics. They were tall and had dark skin, tight woolly hair, full lips, broad noses, and limited facial and body hair. Gomez (1998) suggests that Africans, upon arriving in America, were dispersed along ethnic and cultural lines. While they eventually dropped their African ethnic identities, they retained some of their original cultures. For example, runaway-slave advertisements sometimes identified the slaves by their ethnic roots ("Dinah, an Ebo wench that speaks very good English").

Historians have disagreed as to whether slavery in colonial Virginia was made politically and psychologically acceptable by an inherent racism among white Europeans, or if slavery emerged as a result of economic factors and racism developed as a consequence of it. The consensus is that the enslavement of Africans was due to economic requirements for labor, to the inability of Africans to resist slavery, and to European beliefs that Africans were an inferior branch of humanity, suited by their characteristics and circumstances to be lifelong slaves.

At first the Africans in the South were outnumbered by white indentured servants, who came voluntarily from Britain. These servants avoided the plantations. With the vast amount of good land and the shortage of laborers, plantation owners turned to lifetime slaves who worked for their keep but were not paid wages and could not easily escape. Slaves had some legal rights (it was a crime to kill a slave, and whites were hanged for it). Generally the slaves developed their own family system, religion and customs in the slave quarters with little interference from owners, who were only interested in work outputs.
By 1700 there were 25,000 slaves in the American colonies, about 10% of the population. A few had come from Africa but most came from the West Indies (especially Barbados), or, increasingly, were native born. Their legal status was now clear: they were slaves for life and so were the children of slave mothers. They could be sold, or freed, and a few ran away. Slowly a free black population emerged, concentrated in port cities along the Atlantic coast from Charleston to Boston. Slaves in the cities and towns had many more privileges, but the great majority of slaves lived on southern tobacco or rice plantations, usually in groups of 20 or more.

The most serious slave rebellion was the Stono Uprising, in September 1739 in South Carolina. The colony had about 56,000 slaves, who outnumbered whites 2:1. About 150 slaves rose up and, seizing guns and ammunition, murdered twenty whites and headed for Spanish Florida. The local militia soon intercepted and killed most of them. All the American colonies had slavery, but it usually took the form of personal servants in the North (where 2% of the people were slaves) and field hands on plantations in the South (where 25% were slaves).

Revolution and early republic: 1775-1840

The Declaration of Independence of 1776 said that all men are born free. Acting on that principle, all the northern states abolished slavery between 1776 and 1805—these were the first places in the world where the government abolished slavery. (Britain abolished slavery in the 1830s.) However, with the cotton gin in the 1790s, slavery became highly profitable in the South and was not abolished. Indeed, it expanded rapidly due to demographic growth. In 1808 it became illegal to buy or sell slaves from abroad, but inside the U.S. South the trade was legal and flourished.

By 1800 most slaves had become Christians. However, few followed the Episcopal or Presbyterian affiliations of most masters; rather, by the 1830s most had become Baptists or Methodists, but with a distinctive difference. Genovese (1974) identified the key features of the black version of Christianity as its raucous emotionalism, an absence of a sense of original sin or depravity, an emphasis on the role of Moses (who at times rivaled Jesus in importance), and an uneasy commingling with magic and conjuring. Genovese argued religion was increasingly central to the lives and self-identity of the slaves. "The religion practiced in the quarters gave the slaves the one thing they absolutely had to have if they were to resist. . . . It fired them with a sense of their own worth before God and man."

The free black population in the South grew rapidly from the 1770s on, rising from 28,000 in 1790 to 186,000 by 1860 in the South Atlantic states alone. Before the American Revolution the increase in the free black population was due mainly to local emancipations, natural population increase, and migration from rural areas. During and after the Revolution, however, there were additional ways to become free, including petitions and lawsuits, the 1782 manumission act, self-purchase, purchase by already free blacks, and individual emancipation. Fear of free blacks in an age of black revolts, however, prompted whites to impose restrictions on manumission and migration and ultimately to revert to the colonial-era policy of expelling free blacks from Virginia. Formal laws and informal customs created innumerable obstacles to the socioeconomic advance of the free blacks in the South.
Laws prohibited free blacks from some activities and occupations and restricted their participation in others. Racism and terrorism by whites also made advancement difficult. Despite these disadvantages, the free black population fared rather well, with much better nutrition than people back in Europe or Africa. They grew nearly as tall as white Americans and towered over contemporary Europeans.

See also: Slavery

Age of abolition, 1840-1877

The Quakers, as well as Evangelical churches in the U.S. and Britain, led the battle for abolition of slavery. The abolition movement in the U.S. was highly visible and extremely controversial, but it was never large—with fewer than 50,000 activists at most, about half of them free blacks living in the North. Over 1 million slaves were moved from the older seaboard slave states, with their declining economies, to the rich cotton states of the southwest; many others were sold and moved locally. Berlin (2003) argues that this "Second Middle Passage" shredded the planters' paternalist pretenses in the eyes of black people and prodded slaves and free people of color to create a host of oppositional ideologies and institutions that better accounted for the realities of endless deportations, expulsions and flights that continually remade their world.

The political and constitutional debate among whites led to the secession of the Deep South and to the Civil War in 1861. The new Republican Party saw slavery as an evil that had to be eventually put on the road to extinction. In the war, however, abolition became a tool for Union victory, as strategized by Abraham Lincoln. The point was that slavery was a main prop of the rebellion, and to win the war it had to be eliminated. Emancipation would have the effect of energizing Confederates who feared a race war, but it would also energize Northerners who saw it as a moral cause, and would help keep Europe from supporting the rebels.

At the beginning of the war some Union commanders thought they were supposed to return escaped slaves to their masters. By 1862, when it became clear that this would be a long war, the question of what to do about slavery became more general. The Southern economy and military effort depended on slave labor. It began to seem unreasonable to protect slavery while blockading Southern commerce and destroying Southern production. As one Congressman put it, the slaves "cannot be neutral. As laborers, if not as soldiers, they will be allies of the rebels, or of the Union." The same Congressman—and his fellow Radical Republicans—put pressure on Lincoln to rapidly emancipate the slaves, whereas Conservative Republicans came to accept gradual, compensated emancipation and colonization. In 1861 Lincoln expressed the fear that premature attempts at emancipation would mean the loss of the border states, and that "to lose Kentucky is nearly the same as to lose the whole game."

At first Lincoln reversed attempts at emancipation by Secretary of War Simon Cameron and Generals John C. Fremont (in Missouri) and David Hunter (in the South Carolina Sea Islands) in order to keep the loyalty of the border states and the War Democrats. Lincoln then tried to persuade the border states to accept his plan of gradual, compensated emancipation and voluntary colonization, while warning them that stronger measures would be needed if the moderate approach was rejected. Only the District of Columbia accepted Lincoln's gradual plan, and Lincoln issued his final Emancipation Proclamation on January 1, 1863.
In his letter to Hodges, Lincoln explained his belief that "If slavery is not wrong, nothing is wrong … And yet I have never understood that the Presidency conferred upon me an unrestricted right to act officially upon this judgment and feeling ... I claim not to have controlled events, but confess plainly that events have controlled me."

The Radical Republicans put intense political pressure on Lincoln to use emancipation as a weapon. The problem was that he needed first to shore up pro-Union support in key border states, especially Kentucky. Only after it was safe could he act, and then he needed a military victory first. Lincoln thrilled the anti-slavery forces by announcing the Emancipation Proclamation in September 1862; the official proclamation came on January 1, 1863, and it had the effect of freeing most of the 4 million slaves. It also greatly reduced the Confederacy's hope of getting aid from Britain or France. Lincoln's moderate approach succeeded in getting border states, War Democrats and emancipated slaves fighting on the same side for the Union.

The Union-controlled border states (Kentucky, Missouri, Maryland, Delaware and West Virginia) were not covered by the Emancipation Proclamation. All abolished slavery on their own, except Kentucky. The great majority of the 4 million slaves were freed by the Emancipation Proclamation, as Union armies moved South. To handle this problem Lincoln proposed a constitutional amendment. The 13th Amendment, passed by Congress in January 1865 and ratified by the states in December 1865, finally freed the remaining 40,000 slaves in Kentucky.

Age of Jim Crow, 1877-1954

See also: Jim Crow

Booker T. Washington (1856-1915) was the dominant political and educational leader of the African-American community 1890-1915. He is most famous for his inspiring autobiography, Up from Slavery, his leadership of black conservative business and religious leaders, his founding of Tuskegee Institute as a college for technical training, and his emphasis on self-help and education as the cure for poverty and the second class status of blacks in America. In his "Atlanta Compromise" of 1895 Washington reluctantly accepted Jim Crow, segregation and disfranchisement in return for black freedom in economic, religious and cultural affairs. Washington was highly popular among top white leaders and most blacks, but his approach was attacked after 1909 as too conservative by W.E.B. DuBois and the NAACP.

The most dramatic demographic change came after 1940, as most blacks left the rural South—some for nearby southern cities, and most headed to large cities in the North and West. In the 1940s, 1.6 million left the South; in the 1950s, 1.5 million; and in the 1960s, 1.4 million. By 1970 there were very few black farmers left. Politically it was a movement from a white-dominated rural South where few blacks could vote or speak out, to a pluralistic political environment where northern central cities were controlled by liberals and their allies in the labor unions.

Age of Civil Rights, 1954 to present

In 1955 blacks in Montgomery, Alabama undertook a boycott of the segregated city buses and chose a local pastor, Martin Luther King, as their leader, and Rosa Parks as a symbolic actor. Drawing on Gandhi's teachings, King directed a nonviolent boycott designed both to end an injustice and to redeem his white adversaries through love.
Love, he said, not only avoided the internal violence of the spirit but also severed the external chain of hatred that only produced more hatred. Somebody, he argued, must be willing to break this chain so that "the beloved community" could be restored and true brotherhood could begin. In November 1956 the boycotters won a resounding moral victory when the United States Supreme Court nullified the Alabama laws that enforced segregated buses. The Montgomery protest captured the imagination of people the world over and marked the beginning of a southern black civil rights movement that rocked the Jim Crow South to its foundations. King, with extraordinary oratorical powers and rich religious imagery, emerged as the most inspiring new moral voice in civil rights.

In August 1957 King and 115 other black leaders met in Montgomery and formed the Southern Christian Leadership Conference (SCLC), with King as leader. Working through southern churches, the SCLC enlisted the religious black community in the freedom struggle by expanding "the Montgomery way" across the South. In 1960 southern black college and high school students launched the sit-in movement, forming the Student Nonviolent Coordinating Committee (SNCC). Through 1961 and 1962 civil rights leaders pressured the John F. Kennedy administration to support a tough civil rights bill, seeking a sort of second Emancipation Proclamation that would employ federal power to wipe out segregation just as Lincoln's 1863 decree had abolished slavery. Kennedy, basically conservative and unwilling to offend his base of Southern white voters, refused to act. Civil rights groups thereupon launched multiple mass demonstrations throughout the South.

King and the SCLC staff would single out some notoriously segregated city with officials who tolerated violence; mobilize the local blacks with songs, Bible readings, and rousing oratory; and then lead them on protest marches conspicuous for their nonviolent spirit and moral purpose. Then the marchers would escalate their demands—even filling up the jails—until they brought about a moment of "creative tension," when white authorities would either agree to negotiate or resort to violence. If violence broke out it would humiliate the moderate whites and redouble national pressures from churches and activists for federal intervention. So far there was no violence on the part of blacks, but they were growing more and more frustrated and angry, with militants like Malcolm X calling for more extreme measures.

Nonviolent confrontation failed politically in Albany, Georgia, in 1962, where white authorities were equally nonviolent. In 1963 it succeeded in Birmingham, Alabama, where Police Commissioner Eugene ("Bull") Connor turned fire-hoses and police dogs on the marchers—in full view of reporters and television cameras. The civil rights activists thus exposed racist hatred to the scorn of national and world opinion. Jailed during the demonstrations, King wrote his classic "Letter from Birmingham Jail," the most influential and eloquent expression of the goals and philosophy of the civil rights movement. King's great "I Have a Dream" speech, delivered during the March on Washington on August 28, 1963, galvanized the movement, putting forth the goal of an integrated, color-blind society.
President Lyndon Johnson, a long-time supporter of civil rights, had replaced Kennedy, and he seized the moment to mobilize a majority coalition of northern Democrats, Republicans, white churches, and white labor unions to break a Senate filibuster and pass the 1964 Civil Rights Act, which desegregated public facilities. Overnight Jim Crow vanished, with little protest or violence. However, within days of the passage of the powerful new law, rioting broke out in black ghettos, as the civil rights leadership discovered it could not control the angry masses. Nor could it control the radical students in SNCC and like-minded groups who were moving rapidly to the left, rejecting alliances with whites, discarding the goal of integration and demanding instead black separatism and "Black Power."

In recent years blacks have made major gains in sports, entertainment and politics. George W. Bush appointed the first two black secretaries of state, Colin Powell and Condoleezza Rice. In a stunning upset, Barack Obama defeated Hillary Clinton for the Democratic nomination for president in 2008, then defeated Republican John McCain in the general election. McCain hailed Obama's win:

- I've always believed that America offers opportunities to all who have the industry and will to seize it. Senator Obama believes that, too. But we both recognize that though we have come a long way from the old injustices that once stained our nation's reputation and denied some Americans the full blessings of American citizenship, the memory of them still had the power to wound. A century ago, President Theodore Roosevelt's invitation of Booker T. Washington to dine at the White House was taken as an outrage in many quarters. America today is a world away from the cruel and prideful bigotry of that time. There is no better evidence of this than the election of an African American to the presidency of the United States. Let there be no reason now for any American to fail to cherish their citizenship in this, the greatest nation on Earth.

The history of slavery has always been a major research topic for white scholars, but they generally focused on the political and constitutional themes until the 1950s, largely ignoring the black slaves themselves. During Reconstruction and the late 19th century, blacks became major actors in the South. The Dunning School of white scholars generally cast the blacks as pawns of white Carpetbaggers, but W.E.B. DuBois, a black historian, and Ulrich B. Phillips, a white historian, studied the African-American experience in depth. Indeed, Phillips set the main topics of inquiry that still guide the analysis of slave economics.

In the black community, in the first half of the 20th century Carter G. Woodson was the major scholar studying and promoting the black historical experience. Woodson insisted that the study of African descendants be scholarly sound, creative, restorative, and, most important, directly relevant to the black community. He popularized black history with a variety of innovative strategies and vehicles, including the outreach activities of his Association for the Study of Negro Life and History, Negro History Week (now Black History Month, in February), and a popular black history magazine. Woodson democratized, legitimized, and popularized black history. Benjamin Quarles (1904–96) and John Hope Franklin (1915-2009) provided a bridge between the work of historians in black schools such as Woodson and the black history that is now well established in mainline universities.
Quarles grew up in Boston, attended Shaw University as an undergraduate, and received a graduate degree at the University of Wisconsin. In 1953 he began teaching at Morgan State College in Baltimore, where he stayed despite a lucrative offer from Johns Hopkins. Franklin taught at Brooklyn College and had a major impact when he was a professor at the elite University of Chicago, 1964-83.

Black history always sought out black agency—even slaves had a certain amount of control over their lives. The assumption that slaves were passive and did not rebel was debated in the 1950s and rejected. Many of the white scholars were former Communists or members of the far left, and they looked for violent rebellion. They found few such rebellions, but much unrest. Herbert Gutman and Leon Litwack showed how, during Reconstruction, former slaves fought to keep their families together and struggled against tremendous odds to define themselves as free people. Robert Fogel, a former Communist who moved to the right, enraged the left when he used quantitative methods to show that the housing, food, clothing and living conditions of the slaves were reasonably favorable. He was awarded the Nobel Prize in Economics for his work.

Today proponents of black history argue that it promotes diversity, develops self-esteem, and corrects myths and stereotypes. Opponents, including Arthur Schlesinger, Jr. and Oscar Handlin, complain that such curricula are dishonest, divisive, and lack academic credibility and rigor.

Knowledge of black history

Surveys of 11th and 12th grade students and adults in 2005 show that American schools have made them very well informed about black history. Both groups were asked to name ten famous Americans, excluding presidents. Of the students, the three highest names were blacks: 67% named Martin Luther King, 60% Rosa Parks, and 44% Harriet Tubman. Among adults, King was 2nd (at 36%) and Parks was tied for 4th with 30%, while Tubman tied for 10th place with Henry Ford, at 16%. When distinguished historians were asked in 2006 to name the most prominent Americans, Parks and Tubman did not make the top 100.

- Earle, Jonathan, and Malcolm Swanston. The Routledge Atlas of African American History (2000) excerpt and text search - Finkelman, Paul, ed. Encyclopedia of African American History, 1619-1895: From the Colonial Period to the Age of Frederick Douglass (3 vol 2006) - Franklin, John Hope, and Alfred Moss, From Slavery to Freedom. A History of African Americans, (2001), standard textbook; first edition in 1947 excerpt and text search - Litwack, Leon, and August Meier. Black Leaders of the 19th Century. (1988) - Franklin, John Hope, and August Meier, eds. Black Leaders of the Twentieth Century. (1982), short biographies by scholars. - Harris, William H. The Harder We Run: Black Workers Since the Civil War. (1982). online edition - Hine, Darlene Clark, Rosalyn Terborg-Penn and Elsa Barkley Brown, eds. Black Women in America - An Historical Encyclopedia, (2005) excerpt and text search - Hine, Darlene Clark, et al. The African-American Odyssey (2 vol, 4th ed. 2007) textbook excerpt and text search vol 1 - Holt, Thomas C. ed. Major Problems in African-American History: From Freedom to "Freedom Now," 1865-1990s (2000) reader in primary and secondary sources - Horton, James Oliver, and Lois E. Horton. Hard Road to Freedom: The Story of African America: From the Civil War to the Millennium (2002), well-balanced survey - Kelley, Robin D. G., and Earl Lewis, eds.
To Make Our World Anew: A History of African Americans. (2000). 672pp; 10 long essays by leading scholars online edition, leftist emphasis - Lowery, Charles D. and John F. Marszalek, eds. Encyclopedia of African-American Civil Rights: From Emancipation to the Present (1992) online edition - Mandle, Jay R. Not Slave, Not Free: The African American Economic Experience since the Civil War (1992) online edition - Painter, Nell Irvin. Creating Black Americans: African American History and Its Meanings, 1619 to the Present. (2006), 480 pp survey; leftist emphasis - Palmer, Colin A. ed. Encyclopedia Of African American Culture And History: The Black Experience In The Americas (6 vol. 2005) - Salzman, Jack, David Lionel Smith, and Cornel West, eds. Encyclopedia of African-American Culture and History. (5 vol. 1996). - Smallwood, Arwin D The Atlas of African-American History and Politics: From the Slave Trade to Modern Times (1997) Slave era pre 1860 - Berlin, Ira. Many Thousands Gone: The First Two Centuries of Slavery in North America (2000) ACLS E-book - Blassingame, John W. The Slave Community: Plantation Life in the Antebellum South (2nd ed. 1979) excerpt and text search - Fogel, Robert. Time on the Cross: The Economics of American Negro Slavery, (2 vol, 1974). (with Stanley Engerman), highly controversial quantitative study by a conservative - Fogel, Robert. Without Consent or Contract: The Rise and Fall of American Slavery, (2 vol, 1989). - Genovese, Eugene. Roll Jordan Roll: The World the Slaves Made (1974), highly influential study of slavery excerpt and text search, by a former Communist who is now a prominent conservative - Gomez, Michael. Exchanging Our Country Marks: The Transformation of African Identities in the Colonial and Antebellum South (1998) 384pp excerpt and text search - Horton, James Oliver, and Lois E. Horton. Slavery and the Making of America (2006), well-balanced survey - Horton, James Oliver. In hope of liberty: culture, community, and protest among northern free Blacks, 1700-1860 (1998) ACLS E-book - Kolchin, Peter. American Slavery, 1619-1877 (wnd ed. 2003), a short survey excerpt and text search - Kulikoff, Allan. Tobacco and Slaves: The Development of Southern Cultures in the Chesapeake, 1680 - 1800 (1986) - Miller, Randall M., and John David Smith, eds. Dictionary of Afro-Amerian Slavery (1988) - Rothman, Adam. Slave Country: American Expansion and the Origins of the Deep South. (2005). 282 pp. excerpt and text search - Sobel, Mechal. The World They Made Together: Black and White Values in Eighteenth-Century Virginia (1987). - White, Deborah Gray. Ar'n't I a Woman? Female Slaves in the Plantation South, (2nd ed. 1999) excerpt and text search - Wood, Peter H. Black majority: Negroes in colonial South Carolina from 1670 through the Stono Rebellion (1975) ACLS E-book Emancipation and Reconstruction Era: 1860-1890 - Boles, John B. Black Southerners, 1619–1869. (1983) - Butchart, Ronald E. Northern Schools, Southern Blacks, and Reconstruction: Freedmen's Education, 1862-1875 (1980) onlineedition - Cimbala, Paul A. and Trefousse, Hans L. (eds.) The Freedmen's Bureau: Reconstructing the American South After the Civil War. 2005. - Click, Patricia C. Time Full of Trial: The Roanoke Island Freedmen's Colony, 1862-1867 (2001) online edition - Crouch, Barry. The Freedmen's Bureau and Black Texans (1992) - Du Bois, W. E. Burghardt. "The Freedmen's Bureau" (1901) by leading black scholar online edition - Du Bois, W. E. Burghardt. 
Black Reconstruction in America 1860-1880 (1935) - Durrill, Wayne K. "Political Legitimacy and Local Courts: 'Politicks at Such a Rage' in a Southern Community during Reconstruction" in Journal of Southern History, Vol. 70 #3, 2004 pp 577–617 online edition - Foner Eric. Reconstruction: America's Unfinished Revolution, 1863-1877 (1988), the standard history of Reconstruction. - Gutman, Herbert G. The Black Family in Slavery and Freedom, 1750-1925 (1977) - Hahn, Steven. A Nation under Our Feet: Black Political Struggles in the Rural South from Slavery to the Great Migration (2003), 1865-1950 ACLS E-book - Jones, Jacqueline. Labor of Love, Labor of Sorrow: Black Women, Work, and the Family from Slavery to the Present (1985) - Kolchin, Peter. First Freedom: The Responses of Alabama's Blacks to Emancipation and Reconstruction 1972. - Litwack, Leon F. Been in the Storm So Long: The Aftermath of Slavery. 1979, - Oubre, Claude F. Forty Acres and a Mule: The Freedmen's Bureau and Black Land Ownership 1978. - Quarles, Benjamin. The Negro in the Civil War'. (1953) by leading African American historian - Rabinowitz, Howard N. Race Relations in the Urban South, 1865-1890 (1978) - Ransom, Roger L. Conflict and Compromise. (1989), econometric history - Richardson, Joe M. Christian Reconstruction: The American Missionary Association and Southern Blacks, 1861-1890 (1986). - Rodrigue, John C. "Labor Militancy and Black Grassroots Political Mobilization in the Louisiana Sugar Region, 1865-1868" in Journal of Southern History, Vol. 67 #1, 2001 pp 115–45; online edition also in JSTOR - Schwalm, Leslie A. "'Sweet Dreams of Freedom': Freedwomen's Reconstruction of Life and Labor in Lowcountry South Carolina," Journal of Women's History, Vol. 9 #1, 1997 pp 9–32 online edition - Span, Christopher M. "'I Must Learn Now or Not at All': Social and Cultural Capital in the Educational Initiatives of Formerly Enslaved African Americans in Mississippi, 1862-1869," The Journal of African American History, 2002 pp 196–222 online edition - Williamson, Joel. After Slavery: The Negro in South Carolina during Reconstruction, 1861-1877 1965. Jim Crow Era: 1877-1954 - Anderson, James D. The Education of Blacks in the South, 1860-1935 (1988) online edition - Bayor, Ronald H. Race and the Shaping of Twentieth-Century Atlanta (1996) - Bond, Horace Mann. “The Extent and Character of Separate Schools in the United States.” Journal of Negro Education 4(July 1935):321–27. in JSTOR - Brundage, W. Fitzhugh, ed Booker T. Washington and Black Progress: Up from Slavery 100 Years Later (2003) - Bullock, Henry Allen. A History of Negro Education in the South: From 1619 to the Present (1967) ACLS E-book - Cartwright, Joseph H. The Triumph of Jim Crow: Tennessee Race Relations in the 1880s (1976) - Dailey, Jane, Glenda Elizabeth Gilmore, and Bryant Simon, eds. Jumpin' Jim Crow: Southern Politics from Civil War to Civil Rights (2000), essays by scholars on impact of Jim Crow on black communities online edition - Gaines, Kevin. Uplifting the Race: Black Leadership, Politics, and Culture in the Twentieth Century (1996). online edition - Gatewood, Jr., Willard B. Aristocrats of Color: The Black Elite, 1880-1920 (2000) - Gilmore, Glenda Elizabeth. Gender and Jim Crow Women and the Politics of White Supremacy in North Carolina, 1896-1920 (1996) online edition; also excerpt and text search - Gosnell, Harold F. Negro politicians: the rise of Negro politics in Chicago, (1935, 1967) ACLS E-book - Hahn, Steven. 
A Nation under Our Feet: Black Political Struggles in the Rural South from Slavery to the Great Migration (2003), 1865-1950 ACLS E-book; also excerpt and text search - Jones, Jacqueline. Labor of Love, Labor of Sorrow: Black Women, Work, and the Family from Slavery to the Present (1985) - Harlan. Louis R. Booker T. Washington: The Making of a Black Leader, 1856-1900 (1972) the standard biography, vol 1 - Harlan. Louis R. Booker T. Washington: The Wizard of Tuskegee 1901-1915 (1983), the standard scholarly biography vol 2 online edition vol 2 - Harlan. Louis R. Booker T. Washington in Perspective: Essays of Louis R. Harlan (1988) online edition - Harlan. Louis R. "The Secret Life of Booker T. Washington." Journal of Southern History 37#3 (1971). pp 393–416 Documents Booker T. Washington's secret financing and directing of litigation against segregation and disfranchisement. in JSTOR - McMurry, Linda O. George Washington Carver, Scientist and Symbol (1982) online edition - Jones, Jacqueline. Labor of Love, Labor of Sorrow: Black Women, Work, and the Family from Slavery to the Present (1985) excerpt and text search - Lemann, Nicholas. The Promised Land: The Great Black Migration and How It Changed America (1992) excerpt and text search - Lewis, David Levering. W. E. B. DuBois, 1868-1919: Biography of a Race (2 vol 1993, 2000). excerpt and text search vol 1, winner of Pulitzer Prize; W.E.B. Du Bois: The Fight for Equality and the American Century 1919-1963 (2000) excerpt and text search vol 2 - Litwack, Leon F. Trouble in Mind: Black Southerners in the Age of Jim Crow (1998) excerpt and text search - Logan, Rayford. The Betrayal of the Negro: From Rutherford B. Hayes to Woodrow Wilson (Originally Published as: The Negro in American Life and Thought: The Nadir: 1877-1901) (1970) excerpt and text search - McMillen, Neil R. Dark Journey: Black Mississippians in the Age of Jim Crow. (1989). excerpt and text search - Meier, August. Negro Thought in America, 1880-1915: Racial Ideologies in the Age of Booker T. Washington (1963), - Meier, August. "Toward a Reinterpretation of Booker T. Washington." 23 Journal of Southern History 22#2 (1957) in JSTOR - Myrdal, Gunnar. An American Dilemma: The Negro Problem and Modern Democracy (1944). Highly influential and detailed analysis of the Jim Crow system in operation. excerpt and text search - Norrell, Robert J. Up from History: The Life of Booker T. Washington (2009), new, favorable scholarly biography - Norrell, Robert J. "Booker T. Washington: Understanding the Wizard of Tuskegee" New Coalition News & Views Summer 2004 online edition - Sterner, Richard. The Negro's share: a study of income, consumption, housing, and public assistance (1943), statistical analysis of 1930s ACLS E-book - Walker, Juliet E. K. Encyclopedia of African American Business History (1999) online edition - Woodward, C. Vann. The Strange Career of Jim Crow (3d ed., 1974), in ACLS E-books - Woodward, C. Vann. Origins of the New South, 1877-1913 (1951) ACLS E-book - Wintz, Cary D. African American Political Thought, 1890-1930: Washington, Du Bois, Garvey, and Randolph (1996) online edition Civil Rights Era: 1954 - present - Branch, Taylor. Parting the Waters: America in the King Years 1954-63 (1989) excerpt and text search; Pillar of Fire: America in the King Years 1963-65 (1999) excerpt and text search; At Canaan's Edge: America in the King Years, 1965-68 (2007) - Carson, Clayborne. In Struggle: SNCC and the Black Awakening of the 1960s (1981) - Cashman, Sean Dennis. 
African-Americans and the Quest for Civil Rights, 1900-1990 (1991) - Collier-Thomas, Bettye, and V.P. Franklin. Sisters in the Struggle : African-American Women in the Civil Rights-Black Power Movement (2001) excerpt and text search - Eagles, Charles, ed. The Civil Rights Movement in America (1986), 200pp; 12 short essays by scholars and text search - Farley, Reynolds, and William H. Frey. "The Segregation of Whites from Blacks During the 1980s: Small Steps Toward a More Integrated Society," American Sociological Review, Vol. 59, No. 1 (Feb., 1994), pp. 23–45 heavily statistical; in JSTOR - Fredrickson, George M. Black Liberation: A Comparative History of Black Ideologies in the United States and South Africa (2nd ed. 1996)excerpt and text search - Garrow, David. Bearing the Cross: Martin Luther King, Jr., And The Southern Christian Leadership Conference (1989) excerpt and text search - Goldman, Peter. The Death and Life of Malcolm X, (2nd ed. 1979) - Graham, Hugh Davis. The Civil Rights Era: Origins and Development of National Policy, 1960-1972 (1990) - Harris, Fredrick C. "Something Within: Religion as a Mobilizer of African-American Political Activism," The Journal of Politics, Vol. 56, No. 1 (Feb., 1994), pp. 42–68 in JSTOR - Horne, Gerald. '"'Myth' and the Making of 'Malcolm X'", The American Historical Review, Vol. 98, No. 2 (Apr., 1993), pp. 440–450 in JSTOR - Kluger, Richard. Simple Justice: The History of Brown v. Board of Education and Black America's Struggle for Equality, (1975) excerpt and text search - Ling, Peter J. Martin Luther King, Jr. (2002) excerpt and text search - Meier, August, and Elliot Rudwick. CORE (1975). - Sitkoff, Harvard. The Struggle for Black Equality (1981). - Walton, Hanes, and Robert C. Smith. American Politics and the African American Quest for Universal Freedom (3rd ed 2005) excerpt and text search - Williams, Juan, and Julian Bond. Eyes on the Prize: America's Civil Rights Years, 1954-1965 (1988) excerpt and text search - Wolters, Raymond. The Burden of Brown: Thirty Years of Desegration (1984) excerpt and text search Historiography and teaching - Arnesen, Eric. "Up From Exclusion: Black and White Workers, Race, and the State of Labor History," Reviews in American History 26#1 March 1998, pp. 146–174 in Project Muse - Dagbovie, Pero. The Early Black History Movement, Carter G. Woodson, and Lorenzo Johnston Greene (2007) excerpt and text search - Dagbovie, Pero Gaglo. "Exploring a Century of Historical Scholarship on Booker T. Washington." Journal of African American History 2007 92(2): 239-264. Issn: 1548-1867 Fulltext: Ebsco - Dorsey, Allison. "Black History Is American History: Teaching African American History in the Twenty-first Century." Journal of American History 2007 93(4): 1171-1177. Issn: 0021-8723 Fulltext: History Cooperative - Ernest, John. "Liberation Historiography: African-American Historians before the Civil War," American Literary History 14#3, Fall 2002, pp. 413–443 in Project Muse - Eyerman, Ron. Cultural Trauma: Slavery and the Formation of African American Identity (2002) argues that slavery emerged as a central element of the collective identity of African Americans in the post-Reconstruction era. - Fields, Barbara J. "Ideology and Race in American History," in J. Morgan Kousser and James M. McPherson, eds., Region, Race, and Reconstruction: Essays in Honor of C. Vann Woodward (1982) - Franklin, John Hope. "Afro-American History: State of the Art," Journal of American History (June 1988): 163-173. in JSTOR - Goggin, Jacqueline. 
The lives of stars. Credit: Wikimedia/cmglee/NASA Goddard Space Flight Center Our galaxy, the Milky Way, contains at least 100 billion stars. Over the centuries, astronomers have scoured the skies, developing a thorough understanding of the lives of those stars, from their formation in vast nebulae to their fiery and spectacular deaths. But how has our galaxy changed over time? Where did the stars we see today form, and which of them are siblings, formed together from the same cloud of material? To answer these questions we need to perform Galactic archaeology. To do this, an ambitious Australian-led observing survey, called Galah, is undertaking the immense task of capturing millions of rainbows to disentangle our galaxy’s story. Birds of a feather When we break the light from a star into its component colours, the spectrum is laced with dark lines. These are the telltale fingerprints of the various atomic and molecular species present in the star’s outer layers. By studying those lines we can learn a great deal about the star, such as how fast it spins, its temperature, and what elements it is made of. We can even use them to study stellar magnetic fields. In essence, stars turn hydrogen and helium into heavier elements. When they die, they return that material to the galaxy, to be incorporated in the next generation of stars. Most stars form in clusters, groups of hundreds to millions of stars that form at the same time in a vast nebula. Each nebula will have a unique composition, seeded by the death throes of the previous generation of stars in the distant past. The Fraunhofer lines – absorption lines in the sun’s spectrum that signpost the chemical composition of its outer atmosphere. Credit: Wikimedia/nl:Gebruiker:MaureenV/Phrood/Saperaud We also know that different types of stars return different elements to the galaxy at the end of their lifetimes. Because of this, astronomers can use the elemental patterns in present-day stars to explore what kinds of stars were in our galaxy in the past. On timescales of millions of years, stars escape from the clusters in which they formed and migrate around the disk of the galaxy. If we can use spectra to measure the compositions of many stars, we should be able to identify those that are made of the same stuff. The common origins of widely scattered stars is thus revealed by their matching compositions. That brings us to Galah. Hatching the idea for Galah Galactic archaeology with HERMES (Galah) is a massive observational project using the 3.9-metre Anglo-Australian Telescope at Siding Spring Observatory. Since its start, in late 2013, the survey has collected more than 250,000 spectra, and that number grows every month. To make such a large project possible, Galah uses robots to position fibre optic cables to catch starlight. These allow the Galah team to observe around 350 stars simultaneously in a region of sky four times the diameter of the full Moon. When a star like the sun comes to the end of its life, it blows off its outer layers to form a planetary nebula – ejecting gas that will form the next generation of stars. The Helix nebula (pictured) is one of the finest examples in the night sky. Credit: NASA, ESA, and C R O’Dell (Vanderbilt University) After about an hour staring at one group of stars, Galah moves on, scanning field after field to build its catalogue of stellar spectra. When the project is complete, more than a million rainbows will be caught, each in exquisite detail. 
In good company The past few years have seen a worldwide boom in galactic archaeology. Several survey projects are going on around the globe, each filling a unique niche, and even larger projects are planned for the future. While each of these surveys has a particular goal, when brought together they form a scientific superset that is greater than the sum of its parts. The APOGEE survey studies red giant stars throughout the Milky Way using the 3.5-metre Sloan telescope in the United States. Because it observes at infrared wavelengths, it is the only large survey that can peer through the dust that pervades our galaxy. This allows APOGEE to collect data on stars across the entire galaxy. The disk of our galaxy, which contains the great majority of stars, is surrounded by a roughly spherical halo which consists of ancient stars. The halo hosts the mysterious globular clusters – spherical swarms of millions of tightly packed stars. Each red and blue point shows an individual GALAH target, with the blue as dwarfs and red as giants. The Gaia-ESO Survey targets all these populations and more, using two different visible-light instruments at the 8-metre Very Large Telescope in Chile. Galah, by contrast, focuses mainly on our galaxy’s disk, where the great bulk of its stars reside. By obtaining such a huge sample of stellar spectra, Galah is the perfect complement to these two more focused surveys, providing the context in which their results can be understood. Flying to the future with Gaia While Galah and its fellow archaeological surveys have been farming the night sky, the Gaia spacecraft has been busy pulling together a different, but complementary, data set. Launched in 2013 on an initial five-year mission, Gaia is continually scouring the sky, repeatedly observing more than a billion stars, measuring their positions with unprecedented precision. By observing the same star several times, Gaia can determine how it moves across the sky, giving us an incredibly precise measurement of the star’s distance from Earth. Gaia also reveals the kinematics of the stars – how they move with respect to one another through our galaxy. Even on its own, Gaia’s data will be an incredible resource. But when combined with data obtained by Galah and its siblings, it becomes far more powerful. Gaia will provide the distance to, and the precise motion of, a huge number of stars that will also have been surveyed by Galah. The motion of Barnard’s star, one of the sun’s nearest neighbours, against background stars over a 20 year period. Credit: Steve Quirk Our first steps The first public release of Gaia data earlier this year included precise sky positions and brightnesses for more than a billion stars and quasars. More importantly for our work, it also included the distances and space motions for 2 million stars that had been targeted by previous space missions. To coordinate with Gaia, Galah also made a subset of its data publicly available, including data for 9,860 stars. Of these, 7,894 are in the special subset released by the Gaia team, and hence have precisely known distances. Combining these data sets will allow the Galah team to investigate not just which stars formed together, but to examine whether they still follow similar paths around the galaxy. As the Gaia mission continues, it will provide precise distances and space motions for every single star in the Galah catalogue.
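To make the idea concrete, here is a minimal Python sketch of the kind of bookkeeping involved: inverting a Gaia parallax to get a distance and joining it onto a spectroscopic catalogue by a shared star identifier. The identifiers, column names and the simple 1/parallax inversion are illustrative assumptions, not the survey teams' actual pipeline.

# Minimal sketch (not the GALAH or Gaia pipeline): convert parallaxes to
# distances and cross-match them with a spectroscopic catalogue by source id.
def parallax_to_distance_pc(parallax_mas: float) -> float:
    """Naive distance estimate: d [parsec] = 1000 / parallax [milliarcsec]."""
    if parallax_mas <= 0:
        raise ValueError("parallax must be positive for this simple inversion")
    return 1000.0 / parallax_mas

# Toy catalogues keyed by a shared (hypothetical) source identifier.
gaia = {1001: 7.69, 1002: 2.10}                         # parallaxes in milliarcseconds
galah = {1001: {"fe_h": -0.12}, 1003: {"fe_h": 0.05}}   # spectroscopic abundances

# Cross-match: keep only stars present in both data sets.
combined = {
    sid: {"distance_pc": parallax_to_distance_pc(plx), **galah[sid]}
    for sid, plx in gaia.items() if sid in galah
}
print(combined)   # star 1001: roughly 130 pc away, plus its abundance entry

In practice the teams propagate the parallax uncertainties rather than inverting small parallaxes naively, but the join itself works much like this.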
By piecing Gaia’s data together with our own, we will paint a far more detailed picture of our galaxy’s past, present and future than has ever been seen before. This article was originally published on The Conversation.
In Session 6, we found the areas of different polygons (parallelograms, triangles) by dissecting the polygons and rearranging the pieces into a recognizable simpler shape. In this case, we transformed a parallelogram into a rectangle by slicing a triangle off one end and sliding it along to fit into the other end. In doing so, we established that the area of the parallelogram was the same as the area of the equivalent rectangle (its base multiplied by the perpendicular height). Can we use the same technique and transform a circle into a rectangular shape? Use a compass and draw a large circle. Fold the circle in half horizontally and vertically. Cut the circle into four wedges on the fold lines. Then fold each wedge into quarters. Cut each wedge on the fold lines. You will have 16 wedges. Tape the wedges to a piece of paper to form the following figure: Notice that we have a crude parallelogram with a height equal to the radius of the original circle and a base roughly equal to half the circumference of the original circle.
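Carrying the rearrangement through (the excerpt stops just short of this step), the area of the crude parallelogram, and hence of the circle, follows directly, writing C for the circumference and r for the radius:

A \approx \text{base} \times \text{height} = \tfrac{1}{2}C \times r = \tfrac{1}{2}(2\pi r)\, r = \pi r^{2}

The more wedges we cut, the closer the figure comes to a true rectangle, so the approximation becomes exact in the limit.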
Habits of the Mind S5CS1. Students will be aware of the importance of curiosity, honesty, openness, and skepticism in science and will exhibit these traits in their own efforts to understand how the world works. a. Keep records of investigations and observations and do not alter the records later. b. Carefully distinguish observations from ideas and speculation about those observations. c. Offer reasons for findings and consider reasons suggested by others. d. Take responsibility for understanding the importance of being safety conscious. S5CS2. Students will have the computation and estimation skills necessary for analyzing data and following scientific explanations. a. Add, subtract, multiply, and divide whole numbers mentally, on paper, and with a calculator. b. Use fractions and decimals, and translate between decimals and commonly encountered fractions – halves, thirds, fourths, fifths, tenths, and hundredths (but not sixths, sevenths, and so on) – in scientific calculations. c. Judge whether measurements and computations of quantities, such as length, area, volume, weight, or time, are reasonable answers to scientific problems by comparing them to typical values. S5CS3. Students will use tools and instruments for observing, measuring, and manipulating objects in scientific activities. a. Choose appropriate common materials for making simple mechanical constructions and repairing things. b. Measure and mix dry and liquid materials in prescribed amounts, exercising reasonable safety. c. Use computers, cameras and recording devices for capturing information. d. Identify and practice accepted safety procedures in manipulating science materials and equipment. S5CS4. Students will use ideas of system, model, change, and scale in exploring scientific and technological matters. a. Observe and describe how parts influence one another in things with many parts. b. Use geometric figures, number sequences, graphs, diagrams, sketches, number lines, maps, and stories to represent corresponding features of objects, events, and processes in the real world. Identify ways in which the representations do not match their original counterparts. c. Identify patterns of change in things—such as steady, repetitive, or irregular change—using records, tables, or graphs of measurements where appropriate. d. Identify the biggest and the smallest possible values of something. S5CS5. Students will communicate scientific ideas and activities clearly. a. Write instructions that others can follow in carrying out a scientific procedure. b. Make sketches to aid in explaining scientific procedures or ideas. c. Use numerical data in describing and comparing objects and events. d. Locate scientific information in reference books, back issues of newspapers and magazines, CD-ROMs, and computer databases. S5CS6. Students will question scientific claims and arguments effectively. a. Support statements with facts found in books, articles, and databases, and identify the sources used. b. Identify when comparisons might not be fair because some conditions are different. The Nature of Science S5CS7. Students will be familiar with the character of scientific knowledge and how it is achieved. Students will recognize that: a. Similar scientific investigations seldom produce exactly the same results, which may differ due to unexpected differences in whatever is being investigated, unrecognized differences in the methods or circumstances of the investigation, or observational uncertainties. b. 
Some scientific knowledge is very old and yet is still applicable today. S5CS8. Students will understand important features of the process of scientific inquiry. Students will apply the following to inquiry learning practices: a. Scientific investigations may take many different forms, including observing what things are like or what is happening somewhere, collecting specimens for analysis, and doing experiments. b. Clear and active communication is an essential part of doing science. It enables scientists to inform others about their work, expose their ideas to criticism by other scientists, and stay informed about scientific discoveries around the world. c. Scientists use technology to increase their power to observe things and to measure and compare things accurately. d. Science involves many different kinds of work and engages men and women of all ages and backgrounds. Co-Requisite - Content S5E1. Students will identify surface features of the Earth caused by constructive and destructive processes. a. Identify surface features caused by constructive processes. • Deposition (Deltas, sand dunes, etc.) b. Identify and find examples of surface features caused by destructive processes. • Erosion (water—rivers and oceans, wind) • Impact of organisms c. Relate the role of technology and human intervention in the control of constructive and destructive processes. Examples include, but are not limited to • Seismological studies, • Flood control (dams, levees, storm drain management, etc.) • Beach reclamation (Georgia coastal islands) S5P1. Students will verify that an object is the sum of its parts. a. Demonstrate that the mass of an object is equal to the sum of its parts by manipulating and measuring different objects made of various parts. b. Investigate how common items have parts that are too small to be seen without magnification. S5P2. Students will explain the difference between a physical change and a chemical change. a. Investigate physical changes by separating mixtures and manipulating (cutting, tearing, folding) paper to demonstrate examples of physical change. b. Recognize that the changes in state of water (water vapor/steam, liquid, ice) are due to temperature differences and are examples of physical change. c. Investigate the properties of a substance before, during, and after a chemical reaction to find evidence of change. S5P3. Students will investigate electricity, magnetism, and their relationship. a. Investigate static electricity. b. Determine the necessary components for completing an electric circuit. c. Investigate common materials to determine if they are insulators or conductors of electricity. d. Compare a bar magnet to an electromagnet. S5L1. Students will classify organisms into groups and relate how they determined the groups with how and why scientists use classification. a. Demonstrate how animals are sorted into groups (vertebrate and invertebrate) and how vertebrates are sorted into groups (fish, amphibian, reptile, bird, and mammal). b. Demonstrate how plants are sorted into groups. S5L2. Students will recognize that offspring can resemble parents in inherited traits and learned behaviors. a. Compare and contrast the characteristics of learned behaviors and of inherited traits. b. Discuss what a gene is and the role genes play in the transfer of traits. Teacher note: Be sensitive to this topic since biological parents may be unavailable. S5L3. Students will diagram and label parts of various cells (plant, animal, single-celled, multi-celled). a.
Use magnifiers such as microscopes or hand lenses to observe cells and their structure. b. Identify parts of a plant cell (membrane, wall, cytoplasm, nucleus, chloroplasts) and of an animal cell (membrane, cytoplasm, and nucleus) and determine the function of the parts. c. Explain how cells in multi-celled organisms are similar and different in structure and function to single-celled organisms. S5L4. Students will relate how microorganisms benefit or harm larger organisms. a. Identify beneficial microorganisms and explain why they are beneficial. b. Identify harmful microorganisms and explain why they are harmful. Standards Link: http://www.GeorgiaStandards.org
Morse code is a method of transmitting text information as a series of on-off tones, lights, or clicks that can be directly understood by a skilled listener or observer without special equipment. The International Morse Code encodes the ISO basic Latin alphabet, some extra Latin letters, the Arabic numerals and a small set of punctuation and procedural signals (prosigns) as standardized sequences of short and long signals called "dots" and "dashes", or "dits" and "dahs", as in amateur radio practice. Because many non-English natural languages use more than the 26 Roman letters, extensions to the Morse alphabet exist for those languages. Each Morse code symbol represents either a text character (letter or numeral) or a prosign and is represented by a unique sequence of dots and dashes. The duration of a dash is three times the duration of a dot. Each dot or dash is followed by a short silence, equal to the dot duration. The letters of a word are separated by a space equal to three dots (one dash), and the words are separated by a space equal to seven dots. The dot duration is the basic unit of time measurement in code transmission. To increase the speed of the communication, the code was designed so that the length of each character in Morse varies approximately inversely to its frequency of occurrence in English. Thus the most common letter in English, the letter "E", has the shortest code, a single dot. Morse code is used by some amateur radio operators, although knowledge of and proficiency with it is no longer required for licensing in most countries. Pilots and air traffic controllers usually need only a cursory understanding. Aeronautical navigational aids, such as VORs and NDBs, constantly identify in Morse code. Compared to voice, Morse code is less sensitive to poor signal conditions, yet still comprehensible to humans without a decoding device. Morse is therefore a useful alternative to synthesized speech for sending automated data to skilled listeners on voice channels. Many amateur radio repeaters, for example, identify with Morse, even though they are used for voice communications. SOS, the standard emergency signal, is a Morse code prosign In an emergency, Morse code can be sent by improvised methods that can be easily "keyed" on and off, making it one of the simplest and most versatile methods of telecommunication. The most common distress signal is SOS or three dots, three dashes and three dots, internationally recognized by treaty. A typical "straight key". This U.S. model, known as the J-38, was manufactured in huge quantities during World War II, and remains in widespread use today. In a straight key, the signal is "on" when the knob is pressed, and "off" when it is released. Length and timing of the dots and dashes are entirely controlled by the telegraphist. In 1837, William Cooke and Charles Wheatstone in England began using an electrical telegraph that also used electromagnets in its receivers. However, in contrast with any system of making sounds of clicks, their system used pointing needles that rotated above alphabetical charts to indicate the letters that were being sent. In 1841, Cooke and Wheatstone built a telegraph that printed the letters from a wheel of typefaces struck by a hammer. This machine was based on their 1840 telegraph and worked well; however, they failed to find customers for this system and only two examples were ever built. 
On the other hand, the three Americans' system for telegraphy, which was first used in about 1844, was designed to make indentations on a paper tape when electric currents were received. Morse's original telegraph receiver used a mechanical clockwork to move a paper tape. When an electrical current was received, an electromagnet engaged an armature that pushed a stylus onto the moving paper tape, making an indentation on the tape. When the current was interrupted, a spring retracted the stylus, and that portion of the moving tape remained unmarked. The Morse code was developed so that operators could translate the indentations marked on the paper tape into text messages. In his earliest code, Morse had planned to transmit only numerals, and to use a codebook to look up each word according to the number which had been sent. However, the code was soon expanded by Alfred Vail to include letters and special characters, so it could be used more generally. Vail estimated the frequency of use of letters in the English language by counting the movable type he found in the type-cases of a local newspaper in Morristown. The shorter marks were called "dots", and the longer ones "dashes", and the letters most commonly used were assigned the shorter sequences of dots and dashes. Comparison of historical versions of Morse code with the current standard. 1. American Morse code as originally defined. 2. The modified and rationalized version used by Gerke on German railways. 3. The current ITU standard. In the original Morse telegraphs, the receiver's armature made a clicking noise as it moved in and out of position to mark the paper tape. The telegraph operators soon learned that they could translate the clicks directly into dots and dashes, and write these down by hand, thus making the paper tape unnecessary. When Morse code was adapted to radio communication, the dots and dashes were sent as short and long tone pulses. It was later found that people become more proficient at receiving Morse code when it is taught as a language that is heard, instead of one read from a page. To reflect the sounds of Morse code receivers, the operators began to vocalize a dot as "dit", and a dash as "dah". Dots which are not the final element of a character became vocalized as "di". For example, the letter "c" was then vocalized as "dah-di-dah-dit". In the 1890s, Morse code began to be used extensively for early radio communication, before it was possible to transmit voice. In the late 19th and early 20th centuries, most high-speed international communication used Morse code on telegraph lines, undersea cables and radio circuits. In aviation, Morse code in radio systems started to be used on a regular basis in the 1920s. Although previous transmitters were bulky and the spark gap system of transmission was difficult to use, there had been some earlier attempts. In 1910 the US Navy experimented with sending Morse from an airplane. That same year a radio on the airship America had been instrumental in coordinating the rescue of its crew. Zeppelin airships equipped with radio were used for bombing and naval scouting during World War I, and ground-based radio direction finders were used for airship navigation. Allied airships and military aircraft also made some use of radiotelegraphy. However, there was little aeronautical radio in general use during World War I, and in the 1920s there was no radio system used by such important flights as that of Charles Lindbergh from New York to Paris in 1927. Once he and the Spirit of St. 
Louis were off the ground, Lindbergh was truly alone and incommunicado. On the other hand, when the first airplane flight was made from California to Australia in 1928 on the Southern Cross, one of its four crewmen was its radio operator who communicated with ground stations via radio telegraph. Beginning in the 1930s, both civilian and military pilots were required to be able to use Morse code, both for use with early communications systems and for identification of navigational beacons which transmitted continuous two- or three-letter identifiers in Morse code. Aeronautical charts show the identifier of each navigational aid next to its location on the map. Radio telegraphy using Morse code was vital during World War II, especially in carrying messages between the warships and the naval bases of the belligerents. Long-range ship-to-ship communication was by radio telegraphy, using encrypted messages, because the voice radio systems on ships then were quite limited in both their range and their security. Radiotelegraphy was also extensively used by warplanes, especially by long-range patrol planes that were sent out by those navies to scout for enemy warships, cargo ships, and troop ships. Morse code was used as an international standard for maritime distress until 1999, when it was replaced by the Global Maritime Distress Safety System. When the French Navy ceased using Morse code on January 31, 1997, the final message transmitted was "Calling all. This is our last cry before our eternal silence." In the United States the final commercial Morse code transmission was on July 12, 1999, signing off with Samuel Morse's original 1844 message, "What hath God wrought", and the prosign "SK". A commercially manufactured iambic paddle used in conjunction with an electronic keyer to generate high-speed Morse code, the timing of which is controlled by the electronic keyer. Manipulation of dual-lever paddles is similar to the Vibroplex, but pressing the right paddle generates a series of dahs, and squeezing the paddles produces a dit-dah-dit-dah sequence. The actions are reversed for left-handed operators. Morse code speed is measured in words per minute (wpm) or characters per minute (cpm). Characters have differing lengths because they contain differing numbers of dots and dashes. Consequently words also have different lengths in terms of dot duration, even when they contain the same number of characters. For this reason, a standard word is helpful to measure operator transmission speed. "PARIS" and "CODEX" are two such standard words. Operators skilled in Morse code can often understand ("copy") code in their heads at rates in excess of 40 wpm. In addition to knowing, understanding, and being able to copy the standard written alpha-numeric and punctuation characters or symbols at high speeds, skilled high speed operators must also be fully knowledgeable of all of the special unwritten Morse code symbols for the standard Prosigns for Morse code and the meanings of these special procedural signals in standard Morse code communications protocol. International contests in code copying are still occasionally held. In July 1939 at a contest in Asheville, North Carolina in the United States, Ted R. McElroy set a still-standing record for Morse copying, 75.2 wpm. William Pierpont N0HFF also notes that some operators may have passed 100 wpm. By this time they are "hearing" phrases and sentences rather than words. The fastest speed ever sent by a straight key was achieved in 1942 by Harry Turner W9YZE (d.
1992) who reached 35 wpm in a demonstration at a U.S. Army base. To accurately compare code copying speed records of different eras it is useful to keep in mind that different standard words (50 dot durations versus 60 dot durations) and different interword gaps (5 dot durations versus 7 dot durations) may have been used when determining such speed records. For example, speeds run with the CODEX standard word and the PARIS standard may differ by up to 20%. Today among amateur operators there are several organizations that recognize high speed code ability, one group consisting of those who can copy Morse at 60 wpm. Also, Certificates of Code Proficiency are issued by several amateur radio societies, including the American Radio Relay League. Their basic award starts at 10 wpm with endorsements as high as 40 wpm, and are available to anyone who can copy the transmitted text. Members of the Boy Scouts of America may put a Morse interpreter's strip on their uniforms if they meet the standards for translating code at 5 wpm. International Morse Code Morse code has been in use for more than 160 years—longer than any other electrical coding system. What is called Morse code today is actually somewhat different from what was originally developed by Vail and Morse. The Modern International Morse code, or continental code, was created by Friedrich Clemens Gerke in 1848 and initially used for telegraphy between Hamburg and Cuxhaven in Germany. Gerke changed nearly half of the alphabet and all of the numerals, providing the foundation for the modern form of the code. After some minor changes, International Morse Code was standardized at the International Telegraphy Congress in 1865 in Paris, and was later made the standard by the International Telecommunication Union (ITU). Morse's original code specification, largely limited to use in the United States and Canada, became known as American Morse code or railroad code. American Morse code is now seldom used except in historical re-enactments. In aviation, instrument pilots use radio navigation aids. To ensure that the stations the pilots are using are serviceable, the stations all transmit a short set of identification letters (usually a two-to-five-letter version of the station name) in Morse code. Station identification letters are shown on air navigation charts. For example, the VOR based at Manchester Airport in England is abbreviated as "MCT", and MCT in Morse code is transmitted on its radio frequency. In some countries, during periods of maintenance, the facility may radiate a T-E-S-T code (— · · · · —) or the code may be removed, which tells pilots and navigators that the station is unreliable. In Canada, the identification is removed entirely to signify the navigation aid is not to be used. In the aviation service Morse is typically sent at a very slow speed of about 5 words per minute. In the U.S., pilots do not actually have to know Morse to identify the transmitter because the dot/dash sequence is written out next to the transmitter's symbol on aeronautical charts. Some modern navigation receivers automatically translate the code into displayed letters. Vibroplex brand semiautomatic key (generically called a "bug"). The paddle, when pressed to the right by the thumb, generates a series of dits, the length and timing of which are controlled by a sliding weight toward the rear of the unit. When pressed to the left by the knuckle of the index finger, the paddle generates a single dah, the length of which is controlled by the operator. 
Multiple dahs require multiple presses. Left-handed operators use a key built as a mirror image of this one. International Morse code today is most popular among amateur radio operators, where it is used as the pattern to key a transmitter on and off in the radio communications mode commonly referred to as "continuous wave" or "CW" to distinguish it from spark transmissions, not because the transmission was continuous. Other keying methods are available in radio telegraphy, such as frequency shift keying. The original amateur radio operators used Morse code exclusively, since voice-capable radio transmitters did not become commonly available until around 1920. Until 2003 the International Telecommunication Union mandated Morse code proficiency as part of the amateur radio licensing procedure worldwide. However, the World Radiocommunication Conference of 2003 made the Morse code requirement for amateur radio licensing optional. Many countries subsequently removed the Morse requirement from their licence requirements. Until 1991 a demonstration of the ability to send and receive Morse code at a minimum of five words per minute (wpm) was required to receive an amateur radio license for use in the United States from the Federal Communications Commission. Demonstration of this ability was still required for the privilege to use the HF bands. Until 2000 proficiency at the 20 wpm level was required to receive the highest level of amateur license (Amateur Extra Class); effective April 15, 2000, the FCC reduced the Extra Class requirement to five wpm. Finally, effective on February 23, 2007 the FCC eliminated the Morse code proficiency requirements from all amateur radio licenses. While voice and data transmissions are limited to specific amateur radio bands under U.S. rules, Morse code is permitted on all amateur bands—LF, MF, HF, VHF, and UHF. In some countries, certain portions of the amateur radio bands are reserved for transmission of Morse code signals only. The relatively limited speed at which Morse code can be sent led to the development of an extensive number of abbreviations to speed communication. These include prosigns, Q codes, and a set of Morse code abbreviations for typical message components. For example, CQ is broadcast to be interpreted as "seek you" (I'd like to converse with anyone who can hear my signal). OM (old man), YL (young lady) and XYL ("ex-YL" – wife) are common abbreviations. YL or OM is used by an operator when referring to the other operator, XYL or OM is used by an operator when referring to his or her spouse. QTH is "location" ("My QTH" is "My location"). The use of abbreviations for common terms permits conversation even when the operators speak different languages. Although the traditional telegraph key (straight key) is still used by some amateurs, the use of mechanical semi-automatic keyers (known as "bugs") and of fully automatic electronic keyers is prevalent today. Software is also frequently employed to produce and decode Morse code radio signals. A U.S. Navy signalman sends Morse code signals in 2005. Through May 2013 the First, Second, and Third Class (commercial) Radiotelegraph Licenses using code tests based upon the CODEX standard word were still being issued in the United States by the Federal Communications Commission. The First Class license required 20 WPM code group and 25 WPM text code proficiency, the others 16 WPM code group test (five letter blocks sent as simulation of receiving encrypted text) and 20 WPM code text (plain language) test. 
It was also necessary to pass written tests on operating practice and electronics theory. A unique additional demand for the First Class was a requirement of a year of experience for operators of shipboard and coast stations using Morse. This allowed the holder to be chief operator on board a passenger ship. However, since 1999 the use of satellite and very high frequency maritime communications systems (GMDSS) has made them obsolete. (By that point, meeting the experience requirement for the First was very difficult.) Currently only one class of license, the Radiotelegraph Operator Certificate, is issued. This is granted either when the tests are passed or as the Second and First are renewed and become this lifetime license. For new applicants it requires passing a written examination on electronic theory, as well as 16 WPM code and 20 WPM text tests. However the code exams are currently waived for holders of Amateur Extra Class licenses who obtained their operating privileges under the old 20 WPM test requirement. Radio navigation aids such as VORs and NDBs for aeronautical use broadcast identifying information in the form of Morse Code, though many VOR stations now also provide voice identification. Warships, including those of the U.S. Navy, have long used signal lamps to exchange messages in Morse code. Modern use continues, in part, as a way to communicate while maintaining radio silence. Submarine periscopes include a signal lamp. An important application is signalling for help through SOS, "· · · — — — · · ·". This can be sent many ways: keying a radio on and off, flashing a mirror, toggling a flashlight and similar methods. SOS is not three separate characters, rather, it is a prosign SOS, and is keyed without gaps between characters. Some Nokia mobile phones offer an option to alert the user of an incoming text message with the Morse tone "· · · — — · · ·" (representing SMS or Short Message Service). In addition, applications are now available for mobile phones that enable short messages to be input in Morse Code.
Morse code as an assistive technology
Morse code has been employed as an assistive technology, helping people with a variety of disabilities to communicate. Morse can be sent by persons with severe motion disabilities, as long as they have some minimal motor control. An original solution to the problem that caretakers have to learn to decode has been an electronic typewriter with the codes written on the keys. Codes were sung by users; see the voice typewriter employing morse or votem, Newell and Nabarro, 1968. Morse code can also be translated by computer and used in a speaking communication aid. In some cases this means alternately blowing into and sucking on a plastic tube ("sip-and-puff" interface). An important advantage of Morse code over row column scanning is that, once learned, it does not require looking at a display. Also, it appears faster than scanning. People with severe motion disabilities in addition to sensory disabilities (e.g. people who are also deaf or blind) can receive Morse through a skin buzzer. In one case reported in the radio amateur magazine QST, an old shipboard radio operator who had a stroke and lost the ability to speak or write could communicate with his physician (a radio amateur) by blinking his eyes in Morse. Another example occurred in 1966 when prisoner of war Jeremiah Denton, brought on television by his North Vietnamese captors, Morse-blinked the word TORTURE.
In these two cases interpreters were available to understand those series of eye-blinks.
International Morse code is composed of five elements:
1. short mark, dot or "dit" (·): "dot duration" is one time unit long
2. longer mark, dash or "dah" (–): three time units long
3. inter-element gap between the dots and dashes within a character: one dot duration or one unit long
4. short gap (between letters): three time units long
5. medium gap (between words): seven time units long
Morse code can be transmitted in a number of ways: originally as electrical pulses along a telegraph wire, but also as an audio tone, a radio signal with short and long tones, or as a mechanical, audible or visual signal (e.g. a flashing light) using devices like an Aldis lamp or a heliograph, a common flashlight, or even a car horn. Some mine rescues have used pulling on a rope - a short pull for a dot and a long pull for a dash. Morse code is transmitted using just two states (on and off). Historians have called it the first digital code. Morse code may be represented as a binary code, and that is what telegraph operators do when transmitting messages. Working from the above ITU definition and further defining a bit as a dot time, a Morse code sequence may be made from a combination of the following five bit strings:
- short mark, dot or "dit" (·): 1
- longer mark, dash or "dah" (–): 111
- intra-character gap (between the dots and dashes within a character): 0
- short gap (between letters): 000
- medium gap (between words): 0000000
Note that the marks and gaps alternate: dots and dashes are always separated by one of the gaps, and that the gaps are always separated by a dot or a dash. Morse messages are generally transmitted by a hand-operated device such as a telegraph key, so there are variations introduced by the skill of the sender and receiver — more experienced operators can send and receive at faster speeds. In addition, individual operators differ slightly, for example using slightly longer or shorter dashes or gaps, perhaps only for particular characters. This is called their "fist", and experienced operators can recognize specific individuals by it alone. A good operator who sends clearly and is easy to copy is said to have a "good fist". A "poor fist" is a characteristic of sloppy or hard to copy Morse code.
Below is an illustration of timing conventions. The phrase "MORSE CODE", in Morse code format, would normally be written something like this, where – represents dahs and · represents dits:
−− −−− ·−· ··· ·   −·−· −−− −·· ·
M  O   R   S   E   C    O   D   E
Next is the exact conventional timing for this phrase, with = representing "signal on", and . representing "signal off", each for the time length of exactly one dit:
===.===...===.===.===...=.===.=...=.=.=...=.......===.=.===.=...===.===.===...===.=.=...=
Here a single . between marks is the gap within a character, ... is the gap between letters, and ....... is the gap between words.
Morse code is often spoken or written with "dah" for dashes, "dit" for dots located at the end of a character, and "di" for dots located at the beginning or internally within the character. Thus, the following Morse code sequence:
M O R S E (space) C O D E
−− −−− ·−· ··· · (space) −·−· −−− −·· ·
Note that there is little point in learning to read written Morse as above; rather, the sounds of all of the letters and symbols need to be learned, for both sending and receiving.
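As a minimal sketch of those timing rules (not part of the original article), the following Python fragment builds the on/off string for a word; the dictionary holds only the letters needed for "MORSE CODE" rather than the full ITU table.

# Sketch of the timing rules above: dit = 1 unit on, dah = 3 units on,
# 1 unit off inside a character, 3 units off between letters, 7 between words.
MORSE = {
    "M": "--", "O": "---", "R": ".-.", "S": "...", "E": ".",
    "C": "-.-.", "D": "-..",
}

def keying(text: str, on: str = "=", off: str = ".") -> str:
    words = []
    for word in text.upper().split():
        letters = []
        for ch in word:
            elements = [on if e == "." else on * 3 for e in MORSE[ch]]
            letters.append(off.join(elements))       # 1-unit gap inside a character
        words.append((off * 3).join(letters))        # 3-unit gap between letters
    return (off * 7).join(words)                     # 7-unit gap between words

print(keying("MORSE CODE"))   # reproduces the timing illustration in the text above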
Dit dah dit? All Morse code elements depend on the dot length. A dash is the length of 3 dots, and spacings are specified in number of dot lengths. An unambiguous method of specifying the transmission speed is to specify the dot duration as, for example, 50 milliseconds. Specifying the dot duration is, however, not the common practice. Usually, speeds are stated in words per minute. That introduces ambiguity because words have different numbers of characters, and characters have different dot lengths. It is not immediately clear how a specific word rate determines the dot duration in milliseconds. Some method to standardize the transformation of a word rate to a dot duration is useful. A simple way to do this is to choose a dot duration that would send a typical word the desired number of times in one minute. If, for example, the operator wanted a character speed of 13 words per minute, the operator would choose a dot rate that would send the typical word 13 times in exactly one minute. The typical word thus determines the dot length. It is common to assume that a word is 5 characters long. There are two common typical words: "PARIS" and "CODEX". PARIS mimics a word rate that is typical of natural language words and reflects the benefits of Morse code's shorter code durations for common characters such as "e" and "t". CODEX offers a word rate that is typical of 5-letter code groups (sequences of random letters). Using the word PARIS as a standard, the number of dot units is 50 and a simple calculation shows that the dot length at 20 words per minute is 60 milliseconds. Using the word CODEX with 60 dot units, the dot length at 20 words per minute is 50 milliseconds. Because Morse code is usually sent by hand, it is unlikely that an operator could be that precise with the dot length, and the individual characteristics and preferences of the operators usually override the standards. For commercial radiotelegraph licenses in the United States, the Federal Communications Commission specifies tests for Morse code proficiency in words per minute and in code groups per minute. The Commission specifies that a word is 5-characters long. The Commission specifies Morse code test elements at 16 code groups per minute, 20 words per minute, 20 code groups per minute, and 25 words per minute. The word per minute rate would be close to the PARIS standard, and the code groups per minute would be close to the CODEX standard. While the Federal Communications Commission no longer requires Morse code for amateur radio licenses, the old requirements were similar to the requirements for commercial radiotelegraph licenses. A difference between amateur radio licenses and commercial radiotelegraph licenses is that commercial operators must be able to receive code groups of random characters along with plain language text. For each class of license, the code group speed requirement is slower than the plain language text requirement. For example, for the Radiotelegraph Operator License, the examinee must pass a 20 word per minute plain text test and a 16 word per minute code group test. Based upon a 50 dot duration standard word such as PARIS, the time for one dot duration or one unit can be computed by the formula: T = 1200 / W Where: T is the unit time, or dot duration in milliseconds, and W is the speed in wpm. Sometimes, especially while teaching Morse code, the timing rules above are changed so two different speeds are used: a character speed and a text speed. 
The character speed is how fast each individual letter is sent. The text speed is how fast the entire message is sent. For example, individual characters may be sent at a 13 words-per-minute rate, but the intercharacter and interword gaps may be lengthened so the word rate is only 5 words per minute. Using different character and text speeds is, in fact, a common practice, and is used in the Farnsworth method of learning Morse code. Alternative display of common characters in International Morse code Graphical representation of the dichotomic search table. The graph branches left for each dot and right for each dash until the character representation is exhausted. Link budget issues Morse Code cannot be treated as a classical radioteletype (RTTY) signal when it comes to calculating a link margin or a link budget for the simple reason of it possessing variable length dots and dashes as well as variant timing between letters and words. For the purposes of Information Theory and Channel Coding comparisons the word PARIS is used to determine Morse Code's properties because it has an even number of dots and dashes. Morse Code when transmitted essentially creates an AM signal (even in on/off keying mode), assumptions about signal can be made with respect to similarly timed RTTY signalling. Because Morse code transmissions employ an on-off keyed radio signal, it requires less complex transmission equipment than other forms of radio communication. Morse code is usually heard at the receiver as a medium-pitched on/off audio tone (600–1000 Hz), so transmissions are easier to copy than voice through the noise on congested frequencies, and it can be used in very high noise / low signal environments. The transmitted power is concentrated into a limited bandwidth so narrow receiver filters can be used to suppress interference from adjacent frequencies. The audio tone is usually created by use of a beat frequency oscillator. The narrow signal bandwidth also takes advantage of the natural aural selectivity of the human brain, further enhancing weak signal readability. This efficiency makes CW extremely useful for DX (distance) transmissions, as well as for low-power transmissions (commonly called "QRP operation", from the Q-code for "reduce power"). The ARRL has a readability standard for robot encoders called ARRL Farnsworth Spacing that is supposed to have higher readability for both robot and human decoders. Some programs like WinMorse have implemented the standard. People learning Morse code using the Farnsworth method are taught to send and receive letters and other symbols at their full target speed, that is with normal relative timing of the dots, dashes and spaces within each symbol for that speed. The Farnsworth method is named for Donald R. "Russ" Farnsworth, also known by his call sign, W6TTB. However, initially exaggerated spaces between symbols and words are used, to give "thinking time" to make the sound "shape" of the letters and symbols easier to learn. The spacing can then be reduced with practice and familiarity. Another popular teaching method is the Koch method, named after German psychologist Ludwig Koch, which uses the full target speed from the outset, but begins with just two characters. Once strings containing those two characters can be copied with 90% accuracy, an additional character is added, and so on until the full character set is mastered. 
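A short Python sketch of the arithmetic described above, assuming the 50-unit PARIS word (31 units of keying, 12 units of letter gaps and 7 units of word gap): it computes the dot duration from T = 1200 / W and shows one way Farnsworth-style spacing can stretch only the gap units, using the 13 wpm character / 5 wpm text example from the text.

import math  # not strictly needed; kept for readers who want to extend the sketch

def dot_ms(wpm: float) -> float:
    """Dot duration in milliseconds for a PARIS-standard speed: T = 1200 / W."""
    return 1200.0 / wpm

def farnsworth_gap_unit_ms(char_wpm: float, overall_wpm: float) -> float:
    """Stretched duration of one gap unit so that characters keyed at char_wpm
    still average out to overall_wpm (assumes overall_wpm <= char_wpm).
    Of the 50 PARIS units, 31 are keying and 19 are letter/word gaps."""
    seconds_per_word = 60.0 / overall_wpm
    character_time = 31 * dot_ms(char_wpm) / 1000.0
    return (seconds_per_word - character_time) / 19 * 1000.0

print(dot_ms(20))                      # 60.0 ms, as stated in the text
print(farnsworth_gap_unit_ms(13, 5))   # about 481 ms per gap unit, vs about 92 ms unstretched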
In North America, many thousands of individuals have increased their code recognition speed (after initial memorization of the characters) by listening to the regularly scheduled code practice transmissions broadcast by W1AW, the American Radio Relay League's headquarters station. In the United Kingdom many people learned the Morse code by means of a series of words or phrases that have the same rhythm as a Morse character. For instance, "Q" in Morse is dah-dah-di-dah, which can be memorized by the phrase "God save the Queen", and the Morse for "F" is di-di-dah-dit, which can be memorized as "Did she like it." A well-known Morse code rhythm from the Second World War period derives from Beethoven's Fifth Symphony, the opening phrase of which was regularly played at the beginning of BBC broadcasts. The timing of the notes corresponds to the Morse for "V"; di-di-di-dah and stood for "V for Victory" (as well as the Roman numeral for the number five).
Letters, numbers, punctuation, prosigns for Morse code and non-English variants
Prosigns for Morse code are special (usually) unwritten procedural signals or symbols that are used to indicate changes in communications protocol status or white space text formatting actions. The symbols !, $ and & are not defined inside the ITU recommendation on Morse code, but conventions for them exist. The @ symbol was formally added in 2004. There is no standard representation for the exclamation mark (!), although the KW digraph (– · – · – –) was proposed in the 1980s by the Heathkit Company (a vendor of assembly kits for amateur radio equipment). While Morse code translation software prefers the Heathkit version, on-air use is not yet universal as some amateur radio operators in North America and the Caribbean continue to prefer the older MN digraph (– – – ·) carried over from American landline telegraphy code. The ITU has never codified formal Morse Code representations for currencies as the ISO 4217 Currency Codes are preferred for transmission. The $ sign code was represented in the Phillips Code, a huge collection of abbreviations used on land line telegraphy, as SX. The representation of the & sign given above, often shown as AS, is also the Morse prosign for wait. In addition, the American landline representation of an ampersand was similar to "ES" (· · · ·) and hams have carried over this usage as a synonym for "and" (WX HR COLD ES RAINY, "the weather here is cold & rainy").
Keyboard AT @
On May 24, 2004 — the 160th anniversary of the first public Morse telegraph transmission — the Radiocommunication Bureau of the International Telecommunication Union (ITU-R) formally added the @ ("commercial at" or "commat") character to the official Morse character set, using the sequence denoted by the AC digraph (· – – · – ·). This sequence was reportedly chosen to represent "A[T] C[OMMERCIAL]" or a letter "a" inside a swirl represented by a "C". The new character facilitates sending email addresses by Morse code and is notable since it is the first official addition to the Morse set of characters since World War I. For Chinese, Chinese telegraph code is used to map Chinese characters to four-digit codes and send these digits out using standard Morse code. Korean Morse code uses the SKATS mapping, originally developed to allow Korean to be typed on western typewriters. SKATS maps hangul characters to arbitrary letters of the Latin script and has no relationship to pronunciation in Korean.
For Russian and Bulgarian, Russian Morse code is used to map the Cyrillic characters to four-element codes. Many of the characters are encoded the same way (A, O, E, I, T, M, N, R, K, etc.). Bulgarian alphabet contains 30 characters, which exactly match all possible combinations of 1, 2, 3 and 4 dots and dashes. Russian requires 1 extra character, "Ы" , which is encoded with 5 elements. During early World War I (1914–1916) Germany briefly experimented with 'dotty' and 'dashy' Morse, in essence adding a dot or a dash at the end of each Morse symbol. Each one was quickly broken by Allied SIGINT, and standard Morse was restored by Spring 1916. Only a small percentage of Western Front (North Atlantic and Mediterranean Sea) traffic was in 'dotty' or 'dashy' Morse during the entire war. In popular culture, this is mostly remembered in the book The Codebreakers by Kahn and in the national archives of the UK and Australia (whose SIGINT operators copied most of this Morse variant). Kahn's cited sources come from the popular press and wireless magazines of the time. Other forms of 'Fractional Morse' or 'Fractionated Morse' have emerged. Decoding software for Morse code ranges from software-defined wide-band radio receivers coupled to the Reverse Beacon Network, which decodes signals and detects CQ messages on ham bands, to smartphone applications.
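As an illustration of the decoding direction (a toy sketch, not any particular package), a table lookup is enough once the incoming signal has been split into letters and words; the single-space and "/" separators used here are just one common text convention, not part of the standard.

# Minimal decoding sketch: reverse dot/dash lookup on a (truncated) table.
MORSE_TO_CHAR = {".": "E", "-": "T", "...": "S", "---": "O", ".-": "A", "-.": "N"}

def decode(signal: str) -> str:
    words = []
    for word in signal.strip().split("/"):          # "/" separates words here
        words.append("".join(MORSE_TO_CHAR.get(sym, "?") for sym in word.split()))
    return " ".join(words)

print(decode("... --- ..."))   # prints "SOS"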
For the first time, spacecraft have flown through the heart of a magnetic process that controls Earth's space weather and geomagnetic storms. Earth is surrounded by a magnetic bubble, called the magnetosphere, which protects us from harmful radiation from space. The magnetosphere is defined by magnetic field lines, stretching out into space from Earth. When these lines come up against field lines in different orientations -- for example from the Sun -- a process called magnetic reconnection occurs. Magnetic reconnection is when the field lines clash and rearrange themselves in an explosive reaction. The process throws out hot jets of particles, allowing them to cross boundaries normally created by the field lines. In the Earth system, this process plays the key role in causing geomagnetic storms that can disrupt communications systems on the surface and satellites in space. It also leads to the creation of the auroras in the northern and southern hemispheres. At the heart of magnetic reconnection is an extremely fast reaction that shoots out protons and electrons. The movement of protons had been observed before, but now spacecraft have been able to capture direct measurements of the movements of the electrons for the first time. The measurements showed that the electrons shot away in straight lines from the original event at hundreds of miles per second across the magnetic boundaries that would normally deflect them. Once across the boundary, the particles curved back around in response to the new magnetic fields they encountered, making a U-turn. The measurements, published today in the journal Science, come from NASA's Magnetospheric Multiscale (MMS) Mission, launched in March 2015. On October 16, 2015, the spacecraft travelled straight through a magnetic reconnection event at the boundary where Earth's field lines bump up against the Sun's field lines. Dr Jonathan Eastwood from Imperial's Department of Physics, who was part of the team searching the data for reconnection events and analysing the results, said: "There have been theories about the movement of electrons in magnetic reconnection for decades, but this is the first real proof of what they do. We have known what should be there -- but knowing and actually measuring are two very different things." The scientists' observations coincide with a computer simulation known as the crescent model, named for the characteristic crescent shapes that the graphs show to represent how far across the magnetic boundary the electrons can be expected to travel before turning around again. "This is the first time we have flown through the heart of a reconnection event. While the data support the crescent theory, some measurements do differ from what we might expect, throwing up new questions about the dynamics," said Dr Eastwood. "Understanding the fundamental physics of this process will have a real impact on our understanding of disruptive space weather events." MMS is made up of four satellites flown in a precise formation using GPS that keeps them only around 10 kilometres apart. This allows them to map the evolution of the event as it happens, and to track the small-scale movements of electrons. It is also able to track the electrons much faster than previous satellites. Dr Tom Moore, the mission scientist for MMS at NASA said: "Satellite measurements of electrons have been too slow by a factor of 100 to sample the magnetic reconnection region. 
The precision and speed of MMS, however, opened up a new window on the universe, a new 'microscope' to see reconnection." MMS has made more than 4,000 trips through the magnetic boundaries around Earth, each time gathering information about the way the magnetic fields and particles move. Not every reconnection event is explosive -- some are steady and sometimes when field lines meet it doesn't happen at all. The variation has long been a mystery, which this new data could help solve. The first phase of the mission flew through magnetic boundaries on the day side of Earth, facing the Sun. Magnetic reconnection also occurs at the night side, in a region called the magnetotail, where it affects the aurora. Reconnection events here are expected to be 'more explosive', according to Dr Eastwood, as they brighten the aurora and create magnetic storms.
Mathematics_Notes 2016 HSC Good notes for HSC Mathematics 2016. NOTES | KEVIN LUU | 5.2 – 5.3 MATHEMATICS ASSESSMENT Review types of data, collecting data, sorting data, measures of central tendency, measures of spread and displaying data. Interpreting results from two sets of data (i.e. back-to-back stem-and-leaf displays, histograms, double column graphs, or box-and-whisker plots). Find the range, interquartile range and standard deviation as measures of spread of data sets - Find the mean and standard deviation of a set of data using digital technologies – calculators - Compare and describe the spread of sets of data with the same mean but different standard deviations Bivariate Data: recognises the difference between dependent and independent variables. Describes the strength and direction of the relationship between two variables displayed in a scatter plot, e.g. strong positive relationships, weak negative relationships with justifications. Uses lines of best fit to predict what might happen between known data values (interpolation) and predict what might happen beyond known data values (extrapolation). Know the six processes for setting up statistical investigations. Identify reasons why data in a display may be misrepresented. EXPECTATIONS Use measures of central tendency (mean, mode, median) and the range to analyse data that is displayed in a frequency table, stem-and-leaf plot or dot plot. Use the terms ‘skewed’ or ‘symmetrical’ when describing the shape of a distribution. Compare two sets of data and draw conclusions by finding the mean, mode and/or median, and the range of both sets. Construct a cumulative frequency table, histogram and polygon (ogive) for ungrouped data. Use cumulative frequency to find the median. Group data into class intervals. Construct a cumulative frequency table and histogram for grouped data. Find the mean and modal class of grouped data. Determine the upper and lower quartiles for a set of scores. Construct a box-and-whisker plot using the five-number summary. Use a calculator to find the standard deviation of a set of scores. Use the mean and standard deviation to compare two sets of data. Compare the relative merits of measures of spread (range, interquartile range and standard deviation). STATISTICS TERMINOLOGY BIVARIATE DATA - data that has two variables BOX PLOT (BOX-AND-WHISKER PLOT) - a diagram obtained from the five-number summary - the box shows the middle 50% of scores (the interquartile range) - the whiskers show the extent of the bottom and top quartiles as well as the range CENSUS - a survey of a whole population CUMULATIVE FREQUENCY - the number of scores less than or equal to a particular outcome - e.g. for the data 3,6,5,3,5,5,4,3,3,6 the cumulative frequency of 5 is 8 (there are 8 scores of 5 or less) CUMULATIVE FREQUENCY HISTOGRAM (AND POLYGON) - these show the outcomes and their cumulative frequencies DATA - the pieces of information (or ‘scores’) to be examined - categorical: data that uses non-numerical categories - ordered data involves a ranking, e.g. exam grades, garment sizes - distinct data has no order, e.g. colours, types of cars - numerical: data that uses numbers to show ‘how much’ - continuous data can have any numerical value within a range, e.g. height - discrete data is restricted to certain numerical values, e.g.
number of pets DOT PLOT - a type of graph that uses one axis and a number of dots above the axis EXTRAPOLATION - predicting data beyond the range of values given FIVE NUMBER SUMMARY - a set of numbers consisting of the minimum score, the three quartiles and the maximum score FREQUENCY - the number of times an outcome occurs in the data - e.g. for the data 3,6,5,3,5,5,4,3,3,6 the outcome 5 has a frequency of 3 FREQUENCY DISTRIBUTION TABLE - a table that shows all the possible outcomes and their frequencies (it is usually extended by adding other columns such as the cumulative frequency) FREQUENCY HISTOGRAM - a type of column graph showing the outcomes and their frequencies FREQUENCY POLYGON - a type of line graph showing outcomes and their frequencies - to complete the polygon, the outcomes immediately above and below the actual outcomes are used (the height of these columns is zero) GROUPED DATA - data that is organised into groups or classes - class intervals: the size of the groups into which the data is organised, e.g. 1-5 (5 scores); 11-20 (10 scores) - class centre: the middle outcome of a class, e.g. the class 1-5 has a class centre of 3 INTERPOLATION - estimating data that lie within the domain of the values given INTERQUARTILE RANGE - the range of the middle 50% of scores - the difference between the median of the upper half of scores and the median of the lower half of scores - IQR = Q3 - Q1 LINE OF BEST FIT - a line that ‘best fits’ the data on a scatter plot MEAN - the number obtained by ‘evening out’ all the scores until they are equal - e.g. if the scores 3,6,5,3,5,5,4,3,3,6 were ‘evened out’, the number obtained would be 4.3 - to obtain the mean, we divide the sum of the scores by the total number of scores MEDIAN - the middle score for an odd number of scores, or the mean of the middle two scores for an even number of scores - the median class is the class of grouped data containing the median MODE (MODAL CLASS) - the outcome or class that contains the most scores OGIVE - another name for the cumulative frequency polygon OUTCOME - a possible value of the data OUTLIER - a score that is separated from the main body of scores QUARTILES - the points that divide the scores up into quarters - the second quartile, Q2, divides the scores into halves (Q2 = median) - the first quartile, Q1, is the median of the lower half of scores - the third quartile, Q3, is the median of the upper half of scores RANGE - the difference between the highest and lowest scores SAMPLE - a part (usually a small part) of a large population - random sample: a sample taken so that each member of the population has the same chance of being included - systematic sample: a sample selected according to some ordering scheme, e.g. every tenth member - stratified sample: a sample taken proportionally from each subgroup in a population SCATTER PLOT - a graph that uses points on a number plane to show the relationship between two variables SHAPE (OF A DISTRIBUTION) - a set of scores can be symmetrical or skewed SOURCES OF DATA - primary: the data has been collected by yourself - secondary: the data has come from an external source, e.g.
newspapers, internet STANDARD DEVIATION - a measure of spread that can be thought of as the average distance of scores from the mean - the larger the standard deviation, the larger the spread STATISTICS - the collection, organisation and interpretation of numerical data STEM-AND-LEAF PLOT - a graph that shows the spread of scores without losing the identity of the data - ordered stem-and-leaf plot: the leaves are placed in order - back-to-back stem-and-leaf plot: this can be used to compare two sets of scores, one set on each side VARIABLE - something that can be observed, measured or counted to provide data 1 STATISTICS TYPES OF DATA The data we collect is made up of variables. These are pieces of information, such as a quantity or a characteristic, that can be observed or measured. They may change either over time or between individual observations. The main types of data are: CATEGORICAL – VARIABLES ARE CATEGORIES - ordered | e.g. exam grades, garment sizes - distinct | e.g. types of cars, eye colour NUMERICAL – VARIABLES ARE NUMBERS - discrete | e.g. goals scored, number of pets - continuous | e.g. height of a person, distance thrown COLLECTING DATA There are three main ways of collecting data: CENSUS - a whole population is surveyed, e.g. every student in the school is questioned SAMPLE - a selected group of a population is surveyed, e.g. a small number in each class is questioned OBSERVATION - numerical facts are collected and tabulated, e.g. sports data, weather, sales figures, etc. A sample is usually random to limit the chances of bias occurring. However, it may be systematic if the members of the sample are chosen according to a rule, such as every 10th member of a population. If a population is composed of various sub-groups, the sample could be stratified to ensure a proportionate representation of each group in the sample. Primary source data is collected first hand by observation or survey. Secondary source data is obtained from an external source such as a newspaper, website or another person’s research. SORTING DATA A large amount of data needs to be tabulated (organised into a table) so that it can be analysed. A common form of table is the frequency distribution table. DISCRETE DATA

Outcome (x) | Tally     | Frequency (f) | Cumulative frequency | f × x
1           | |||       | 3             | 3                    | 3
2           | ||||      | 4             | 7                    | 8
3           | |||||||   | 7             | 14                   | 21
4           | ||||||||| | 9             | 23                   | 36
5           | |||||     | 5             | 28                   | 25
6           | ||        | 2             | 30                   | 12
TOTAL       |           | 30            |                      | 105

GROUPED DATA Used to cluster discrete data into groups or to divide continuous data into adjoining groups; the frequency distribution table then uses the columns CLASS | CLASS CENTRE (c.c.) | TALLY | FREQUENCY (f) | f × c.c. | CUMULATIVE FREQUENCY
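To tie these definitions together, here is a minimal Python sketch (not part of the original notes) that computes the summary statistics above for the example data set 3, 6, 5, 3, 5, 5, 4, 3, 3, 6 used throughout. The quartile rule follows the "median of each half" convention described under QUARTILES, which is only one of several conventions calculators use; pstdev gives the population standard deviation (use stdev for the sample version).

```python
from statistics import mean, median, mode, pstdev
from collections import Counter

data = [3, 6, 5, 3, 5, 5, 4, 3, 3, 6]   # example data set from the notes

def five_number_summary(scores):
    """Minimum, Q1, median, Q3 and maximum, using the 'median of each half' rule."""
    s = sorted(scores)
    n = len(s)
    lower, upper = s[: n // 2], s[(n + 1) // 2 :]   # exclude the middle score when n is odd
    return min(s), median(lower), median(s), median(upper), max(s)

lo, q1, q2, q3, hi = five_number_summary(data)

print("mean               :", mean(data))              # 4.3, as in the notes
print("median             :", q2)                      # 4.5
print("mode               :", mode(data))              # 3
print("range              :", hi - lo)                 # 3
print("interquartile range:", q3 - q1)                 # Q3 - Q1 = 5 - 3 = 2
print("std deviation (pop):", round(pstdev(data), 2))

# Cumulative frequency of an outcome = number of scores less than or equal to it.
freq = Counter(data)
cum = 0
for outcome in sorted(freq):
    cum += freq[outcome]
    print(f"outcome {outcome}: frequency {freq[outcome]}, cumulative frequency {cum}")
```

Running it reproduces the figures quoted in the definitions, e.g. a mean of 4.3 and a cumulative frequency of 8 for the outcome 5.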
[NOTE: This article follows the article, "Epistemology, Concepts," which should be read before this one. It is not possible to understand what propositions are without understanding what concepts are.] All knowledge is propositional. All we know is in the form of propositions. The word, "propositions," is used throughout philosophy with many different definitions especially in relationship to logic. In this article a proposition is defined as follows: A proposition is a verbal statement or sentence that asserts something about something else. The Simple Basic Structure Of Propositions Very few of the sentences we use in every day speech, writing, or thinking will very closely follow the basic structure of propositions. While there are no rules for how propositions must be formed, because language is a human invention, a description of how basic propositions are formed will explain what a proposition must do to be knowledge. Propositions are sentences or statements that assert a relationship or relationships between two or more existents. The existents and the relationships are identified by their concepts or descriptions, which are designated by the words, or phrases that designate them. The designating words in a proposition are called terms. Every basic proposition consists of three terms, a subject, about which the assertion is being made; a predicate which is being asserted about the subject, and a copula, which specifies the exact relationship between the subject and predicate. In the proposition, "coffee is a beverage," the terms are, "coffee," "is," and "a beverage." "Coffee," is the subject, "a beverage," is the predicate, and "is" is the copula. In basic propositions, the copula is usually a form of the verbs, "to be" or "has." The copula, "is," in basic propositions does not mean "equal to," or, "identical with," but simply that the existent identified by the subject term, "is or has," whatever quality, relationship, action, or category is indicated by the predicate term. A proposition does not assert a relationship between the concepts or the terms of the proposition, but between the existents the concepts or descriptions identify. In the example proposition, it is not the term, "coffee," that is the term, "beverage, and it is not the "concept coffee" that is "the concept beverage" (you cannot drink concepts) it is the "actual black liquid" identified by, "coffee" that is "something you drink" identified by the universal concept "beverage." The terms, "subject" and "predicate" in basic propositions is not identical with the same terms in English grammar. The subject may consist of any number of words as may the predicate. In the proposition, "the last person leaving the room is responsible for turning out the lights," "the last person leaving the room," is the subject, and, "responsible for turning out the lights," is the predicate. Knowledge and Propositions All knowledge consists of propositions which may be actual propositions or implied. All knowledge is knowledge about things. In propositions, the things that are known about are subjects, and the things known about subjects are predicates. The subject, "thing," is whatever existent or existents are identified by the concept or description identifying what it is the predicate is about. The predicate that is "what is asserted," about the subject, is a concept or description identifying what is asserted about the subject. 
In the simplest propositions, the subject and predicate consist of single terms, such as, "plants are living," and "water is liquid." Every proposition asserts something (the predicate) about something else (the subject). The usual form of a proposition is, "the subject is the predicate," or, "the subject has the predicate." The subject (A) of a proposition may be a concept for or description of a single existent, a combination of existents, a category of existents (universal), material or epistemological, including: existents, qualities, actions (events, behavior), or relationships. The predicate (B) may be a concept for or description of a category of existents (universal), a quality, an action (events, behavior), or relationship. A relationship (x) may be any kind of relationship, material or epistemological. The following are all the possible forms of propositions: something is a referent of a universal (A is a B); something has a specific quality (A has quality B); something is doing (or does) some action (A does B); something has a specific relationship to something else (A has relationship x to B). [NOTE: Those propositions using the copula, "has," or, "have," mean the same as, "is," or, "are," in the sense that "A has the quality 'red'," means the same as, "A is red," and "A has the relationship 'above' to B," means the same as "A is above B."] [NOTE: Since the subject term can identify a collection of existents, all propositions may also have a plural form as, "some things are ...," or, "some things have ...."] A proposition may assert the negative of any proposition: something is not a referent of a universal (A is not a B); something does not have a specific quality (A has no quality B); something is not doing (or does not do) some action (A does not do B); something does not have a specific relationship to something else (A has no relationship x to B). A proposition may assert a past version of any proposition: something was a referent of a universal (A was a B); something had a specific quality (A had quality B); something was doing (or did) some action (A did B); something had a specific relationship to something else (A had relationship x to B). A proposition may assert a past negative version of any proposition: something was not a referent of a universal (A was not a B); something did not have a specific quality (A had no quality B); something was not doing (or did not do) some action (A did not do B); something did not have a specific relationship to something else (A had no relationship x to B). Since everything a human being does must be consciously chosen, the thought processes used to make choices are always about the future, whether that future is the next moment or many years later. Propositions about the future are different from all others, because they are neither true nor false. Every future proposition is hypothetical. It is hypothetical in the sense that it is conditional, contingent on all things being what they are presently known to be. Future propositions are certain to the degree they are based on principles and whatever they are contingent on is known. A proposition that states the velocity of an object falling toward the earth is almost perfectly certain, while a proposition about tomorrow's weather is much less certain. What Future Propositions Assert Future propositions assert the same kind of relationships all propositions assert.
All future propositions, however, imply the contingent context of what they assert, as, for example: "within the context of what is currently known," or, "all things remaining as they are currently known to be." A proposition may assert a future version of any proposition: something will be a referent of a universal (A will be a B); something will have a specific quality (A will have quality B); something will be doing (or will do) some action (A will do B); something will have a specific relationship to something else (A will have relationship x to B). A proposition may assert a future negative version of any proposition: something will not be a referent of a universal (A will not be a B); something will not have a specific quality (A will not have quality B); something will not be doing (or will not do) some action (A will not do B); something will not have a specific relationship to something else (A will not have relationship x to B). Universal and conditional propositions based on principles are sometimes stated in future form but are not really future propositions. For example, "triangular braces will provide rigid support," or, "water will freeze at temperatures below 32 degrees F," are not future, but "timeless" propositions. Propositions of intention may be in future form, "Tomorrow I'll see the doctor." Such propositions are not true if they are predictions, but are true if they are only intentions and really are what one intends, because they are then present propositions. Variations Of Propositional Forms Propositions must assert one of the above, but do not have to be in the exact form described. One of the most common forms of propositions uses the copula "equals" (=). In propositions, "equals," means, "has the specific quantitative quality." For example, "A equals B," means, "something A has the specific quantitative quality B," or, "something A has the same specific quantitative quality as B." "A equals B," may also be a relationship x, where relationship x is "has the same value as," "A = B," like the relationships, "has a greater value than," "A > B," and "has a lesser value than," "A < B." Quantitative propositions assume values are "counts" or "measurements in commensurable units." In the proposition, "something A is something B," (A is B), if A and B are both existents, the proposition is not possible, because no two things can be the same thing. Propositions of the form A is B, where A or B is a descriptive attribute (a quality, an action, or a relationship), like Bill is the "culprit" (a quality), or Fitzgerald is "the author of Gatsby" (an action), are not, as some ignorant philosophers have tried to claim, tautologies. A true tautology would be "A is B" where A and B are simply different "words" for the same concept, such as, "a home is a casa," which are just the English word and Spanish word for the same concept; it is exactly the same in meaning as, "a home is a home," which might be interesting rhetoric, but is not a legitimate proposition. The reason these twenty-four basic propositions are all the possible propositions there are, is because existents (physical entities and epistemological existents), events, attributes, and relationships are all there is. Events, attributes, and relationships exist, so they are also existents, and can be identified by concepts and related to other existents in propositions. No attributes, events, or relationships, however, exist independently of the existents they are the actions of, the attributes of, or the relationships between.
Any proposition that treats any attribute, action, or relationship as though it existed independently is an invalid proposition. The basic propositions described are not some kind of law or ontological principles, they are the sum of reasoned observation. Epistemology is not dictated by some authority, it is discovered by human beings, just as all other disciplines, like the sciences, geography, or history. If some other suitable way of identifying the nature of propositions were devised, so long as it did not contradict how they are actually used and how they are actually constructed, it could be just as valid as the way they are here described and classified. Very few of the propositions we use when thinking, writing, or speaking will be in the exact form of basic propositions, but any proposition or statement we make, if it is true, will be able to be put into the form of a basic proposition or a set of basic propositions. [NOTE: This is not some kind of philosophical rule or principle, but a way of understanding if what we think, write, and say is true or not. It is not only possible, but very common, to say things in ways that are so complex, convoluted, and ambiguous that, whether they are true or not, or even if they actually say anything, is difficult to determine. The virtue of the formal propositions is that what they assert is always explicit and whether what they assert is true or not is much easier to determine.] Every proposition, basic or common, is an expression of a relationship or relationships. It is a statement that asserts something about something else, or simply a relationship between two things. The "things" may be single things, groups of things, classes of things, and may be any existent or existents identified by a concept or a description. In the proposition, "the concert begins at eight o'clock," what is being asserted is not about the concert, but at what time the concert starts. To put the proposition in formal form it might be rewritten, "the start of the concert is eight o'clock," or, "the concert's beginning is at eight o'clock." It is now clear the subject is, "the concert's beginning, and the predicate is, "at eight o'clock." Whether the proposition is true or not is determined by whether or not the scheduled time for the beginning of the concert is really eight o'clock, which can be determined by checking the concert program schedule. Since everything has an ontological or epistemological context most of the propositions we use will either assume that context, or specify it with such terms as, "if (the context) then," which means under these conditions or within these limits. Any of the terms of a proposition may be limited or further defined by such modifiers as "all," "every," "some," "most," "many," "few," "only," "not all," "not many," "not a few," "always," "often," "before," "after," and "during." Terms of propositions can be combined using "or," "either...or," "not," "neither...nor," "and," "both...and," "not both," "if...then," and, "if and only if." ["A and X are B," "A has both qualities B and C," "A has neither relationship x or y to B"] Propositions can also be combined using "or," "either...or," "not," "neither...nor," "and," "both...and," "not both," "if...then," and, "if and only if." ["A is X or B is K," "Either A is X or A is K," "If A is X, then B is K."] When propositions are combined, the combined propositions are only true if each individual proposition is true. 
See, "The Meaning of Propositions" and the only conditions in which they are true, below. Propositions Only Epistemological Propositions, like concepts, have no ontological or material existence. They only exist as the creation of and within the consciousness of human minds. Such arguments as, "since rocks actually exist, the proposition, 'rocks exist,' is true, independent of any mind," ignores the fact that propositions do not exist independent of any mind. Rocks, like all material existents exist and are what they are whether anyone is conscious of them or knows what they are, but that they exist and are what they are can only be known and stated by a conscious mind. A similar spurious argument is sometimes made about mathematical concepts and propositions. The argument is that, "two plus two equals four," is a true proposition whether anyone knows it or not and is therefore, "mind independent." Actually 2+2=4 means nothing. 2, 4, +, and = are symbols, like words, for concepts, specifically for the two numbers, 2, which is how far a count gets if counting only two things, and 4, which is how far count gets if counting four things, and +, which is the symbol for adding things together and counting them, and =, which is the symbol meaning the same numeric value. There are no wild 2s, 4s, +s, or =s running around, they only exist in human minds. Valid propositions must be about existents, they are not about the concepts that identify the existents. 2+2=4 means, any existents of which there are two, added to two other existents, when counted will be four existents. 2+2=4 sans existents means nothing. (This is perhaps the biggest mistake in math theory.) All Knowledge Propositional Some attribute the human intellect to the ability to form concepts, and it is true, without that ability the intellect would be impossible. But philosophically, concepts are not knowledge, and concepts alone are not language. All human knowledge is made possible by language and consists of propositions. Knowledge is about things: about existence itself, about the existents that are existence, and about their nature, their attributes, their actions, and their relationships to each other. It is by means of propositions that state what the nature of existents, attributes, actions, and relationships to each other are that all knowledge is expressed and held. Though most philosophers consider concepts knowledge, and even though no knowledge is possible without concepts, concepts alone are not knowledge. All supposed knowledge must be either true or false, and is only knowledge if it is true. Except by implication, no concept is either true or false. Concepts can be good or bad, that is, they may identify confused ideas, or be vague and poorly defined, or may identify what does not materially exist, (as though it did), as most mystic concepts do. What those concepts identify are fictions, but the concepts are neither true nor false. A concept only identifies things, and is just as valid when identifying fictional things as when identifying actual things. Only propositions can be true or false. A proposition is a statement that asserts something about an existent or class of existents. For example, "Zeus is a god worshiped by the ancient Greeks," asserts something about Zeus. If what is being asserted is correct, the proposition is true; if what is being asserted is incorrect, the proposition is false. 
The assertion, in this case, and therefore the proposition, is true, even though the concept "Zeus" identifies a fictional existent. The same concept can be use in both true and false propositions. "The phoenix is a common bird found in the forests of Colorado," is false, but, "the phoenix is a mythical bird of ancient Egypt," is true. Since only propositions can be true or false, knowledge consists entirely of propositions; but all propositions are constructed of concepts, without which no knowledge would be possible. Concepts identify the existents all our knowledge is about. Technically, concepts are not knowledge, but a definition, if correct, is knowledge because it is stated as a proposition. One might say, all correctly defined concepts constitute a kind of knowledge, but notice, it is really only the definitions that are the knowledge, not concepts as identifiers, which is their only function. Concepts imply knowledge, and most concepts would be impossible without knowledge, but attributing knowledge to concepts themselves is an epistemological mistake. It is that mistake that is the source of such confused ideas as those that suggest knowledge somehow changes the meaning of concepts, so that what a child means by an apple, and what a botanist means by an apple are different things. Our knowledge, then, consists of all the propositions we understand and have stored in our memory that are true statements about any aspect of existence. By the time we are adults we have learned and stored thousands, possibly millions of propositions in memory. Conceptual Relationships to Knowledge A concept itself only identifies existents. It is the existents all our knowledge is about, not the concepts that identify those existents. Nevertheless, because a concept identifies existents and means those existents with all their attributes and all that can be known about them, the concept acts like a reference to all that we know about those existents. The concept itself does not hold any of that knowledge or actually do the referencing, but our ability to ask and answer the question, "what do I know about the existents this concept identifies?" is made possible by the concept. In that sense every concept can be used like a key-word in a search engine that will find all the propositions we have in memory that begin, "this existent is ..." where the predicate of the proposition is something known about the concept's referents. [NOTE: The key-word/search-engine explanation is only an analogy for the relationship between consciousness and memory. It is always what we are conscious of that prompts recall from memory. It is when we are consciously considering a concept that propositions we have in memory will be recalled, in most cases the ones we use or consider most often first followed by lesser used ones. Some propositions are very difficult to recall if not often used. An example is that case of thinking, "I'm sure there is something else I know about this but can't remember it."] Using the concept "dog" for example, the answer to the question, "what do I know about dogs," can call up every proposition we can remembered that begins, "a dog is ..." where the predicate is some concept that is true about dogs in general, or any dog in particular. Some of the propositions regarding dogs might be, "a dog is a mammal," "some dogs are dangerous," "some dogs are used to help people," "some dogs are pets," "dogs are not allowed in this building," "that dog bites." 
As each proposition is recalled, the concepts from which the propositions are constructed can begin a new series of recalled propositions. The concept "mammal" in the proposition, "a dog is a mammal," may act as a key-word to search for all that is known about "mammal" by means of the propositions, "a mammal is ...." Since there will be an indefinite number of possible such propositions for every concept indicating what is known about them in terms of other propositions, the interrelationships between concepts and propositions in this manner is indefinitely complex. It is neither necessary or possible to identify or "unscramble" the nearly infinite complexity of the cognitive relationships between concepts and propositions, however, because concepts themselves are the means of maintaining the order and understanding those relationships. It is because all our propositional knowledge is only recalled in relation to concepts we are currently conscious of that propositional ideas always relate to what is currently important to our own thinking. The Meaning of Propositions What concepts mean are the existents they identify which are called their units, referents, or particulars. Since propositions assert something about something else, which specifically attributes the predicate of the proposition to the subject, the proposition means: "whatever is specified by the predicate concept is true of the existents identified by the subject concept." A proposition is a "logical connection" between the existent or existents that are the referents of the subject concept and the existent or existents that are the referents of the predicate concept. A proposition is true if and only if the relationship described by the proposition is the actual case. A proposition is true if: - ... the predicate is a universal concept, and the existent or existents identified by the subject concept really are referents of that concept. - ... the predicate is a concept of a quality or qualities, and the existent or existents identified by the subject concept really have that quality or those qualities. - ... the predicate is a concept of action, actions, behavior, or behaviors, and the existent or existents identified by the subject concept really exhibit the action, actions, behavior, or behaviors. - ... the predicate is a concept for a specified relationship or relationships, and the existent or existents identified by the subject concept really have the specified relationship or relationships. These, of course, apply to all the negative, past, and future forms as well.] In most general terms, therefore, a proposition means the actual connection between the existents identified, and that the predicate is true of the subject. A concept identifies existents. A proposition specifies a connection between existents. [NOTE: It would not be incorrect to say a proposition "identifies" a "relationship" between existents, but I prefer "specify" to distinguish the operation from the function of concepts to "identify" existents, and I also prefer "connection" to "relationship" because one of the possible connections is relationship.] False Dichotomies Of Propositions There is a very bad idea in much of philosophy that is an assault on the nature of propositions by that class of philosophers who are the enemies of knowledge and truth. It is an attempt to invalidate propositions by dividing them into two classes, both wrong and both intellectually destructive. I'll call the two classes certain and unknowable. 
Each class contains three subclasses of propositions, as follows: Certain: Analytic, A Priori, and Necessary. Unknowable: Synthetic, A Posteriori, and Contingent. These supposed subclasses of propositions are usually paired as, "certain vs. unknowable," as follows: Analytic (certain) vs. Synthetic (unknowable), A Priori (certain) vs. A Posteriori (unknowable), and Necessary (certain) vs. Contingent (unknowable). These three false dichotomies of propositions may be briefly described as follows: Pertaining to language and the meaning of concepts: Analytic propositions are those it is supposed must be true (certain) because the predicate is contained in the subject. Such propositions are true, it is claimed, "by definition." One frequent example is that "all bachelors are single" must be true because the predicate (single) is contained in the definition of the subject (bachelor). Such propositions are called, "analytic," because they can be known to be true simply by analyzing the definitions of the words. Synthetic propositions, it is claimed, cannot be known to be true (unknowable) because they depend on experience, which is never certain. Examples are "Americans eat less rice than Asians," "The cat is sick," and, "the light is red," which of course can only be true if correctly observed, so could be false (mistaken). Pertaining to logic and knowledge (epistemology): A priori propositions are those one can know are true (certain) independent of experience. The propositions, "The sum of the interior angles of a triangle is 180 degrees," and, "two plus two is four," are known to be true without measuring every triangle or observing actual addition. These are supposedly known independently of, or prior to, any experience. A posteriori propositions cannot be known to be true (unknowable) because they again depend on experience. Examples are, "The light is yellow," "Tom is heavier than Sue," "The car has crashed," which can only be known if those facts are observed without error. Pertaining to the nature of existence (ontology): Necessary truths cannot be false (certain), it is supposed, because to deny them leads to a contradiction. Examples are, "It is either night or day," "Cows are mammals," and "Ice is solid." These propositions are true, it is said, because their not being true cannot be imagined and they are true "in all possible worlds." Contingent truths are those that are not necessary and whose opposites or contradictions are possible, so they are unknowable. Examples are, "I ate a burger for lunch," "Today is the hottest day on record," and, "The cat is in the cupboard." While any of these may be true, they are contingent, it is argued, because their opposite can be imagined and could have been different, in another universe, for example. All these false dichotomies contradict all sound philosophical principles of epistemology and ontology: The Analytic vs Synthetic dichotomy evades the epistemological fact that a "word" is not a concept and a concept does not mean its definition. A proposition, like, "all bachelors are single," cannot be known to be true because the word, "bachelor," is defined as, "a never married man." The definition may be wrong, and unless one knows what a man is (a male human being), and what, "married," and, "single," mean, whether or not the proposition is true cannot be known. According to the, "analytic," perversion of propositions, if the definition of "duck," was, "a four-footed snake," the proposition, "all ducks are four-footed," would be true.
Synthetic propositions supposedly cannot be known with certainty because they depend on unreliable observation. This absurd idea would mean the proposition, "the man is dead," is doubtful, even though the man is lying there beheaded, with his head lying on his chest. The A Priori vs A Posteriori false dichotomy confuses the epistemological with the ontological. The propositions, "The sum of the interior angles of a triangle is 180 degrees," and, "two plus two is four," are statements based on humanly developed methods of knowledge, geometry and mathematics. None of the concepts (sum, angle, triangle, degrees, two, plus, or four) exists outside human minds. The propositions are known to be true because they are epistemologically correct descriptions of a human method. Note that the interior angles of a triangle sum to 180 degrees only because the human convention for subdividing a circle is 360 degrees. If the convention were 100 degrees, the interior angles of a triangle would sum to 50 degrees, or if 800 degrees, the interior angles of a triangle would sum to 400 degrees. None of these things could have been known a priori, because they could not be known until some human being invented them. A posteriori propositions are those that depend on the actual observation of ontological facts, not epistemological methods. If either a priori or a posteriori propositions were doubtful, it would be the a priori propositions, which depend on the arbitrary inventions of human beings, while a posteriori propositions depend only on the actual facts of existence. The Necessary vs Contingent false dichotomy confuses the ontological with the imaginary or fictional, as well as what actually is with what supposedly could or might have been. Ontological (material) existence is not contingent on anything and cannot be anything other than what it is. The idea of a contingent universe is a fairy tale, the invention of superstition and the supernatural. The only contingencies in this world are future ones, and only those things that depend on human choice, because everything else is determined by the nature of reality itself. All past events and all currently existing entities could not be anything other than what they are.
Presentation transcript: Objectives 1. Identify several forms of energy. 2. Calculate kinetic energy for an object. 3. Apply the work-kinetic energy theorem to solve problems. 4. Distinguish between kinetic and potential energy. 5. Classify different types of potential energy. 6. Calculate the potential energy associated with an object's position. Kinetic energy Kinetic energy is the energy of motion. An object which has motion - whether it be vertical or horizontal motion - has kinetic energy. The equation for kinetic energy is KE = ½mv², where KE is kinetic energy, in joules; v is the speed of the object, in m/s; and m is the mass of the object, in kg. Kinetic energy is a scalar quantity. Questions 1. Which of the following has kinetic energy? a. a falling sky diver b. a parked car c. a shark chasing a fish d. a calculator sitting on a desk 2. If a bowling ball and a volleyball are traveling at the same speed, do they have the same kinetic energy? 3. Car A and car B are identical and are traveling at the same speed. Car A is going north while car B is going east. Which car has greater kinetic energy? Kinetic Energy depends on mass and speed KE = ½mv². The equation shows that the more mass a body has, or the faster it is moving, the more kinetic energy it has. Speed has more effect on kinetic energy than mass: KE is proportional to v², so doubling the speed quadruples kinetic energy, and tripling the speed makes it nine times greater. KE is directly proportional to m, so doubling the mass doubles kinetic energy, and tripling the mass makes it three times greater. Sample Problem 5B A 7.00 kg bowling ball moves at 3.00 m/s. How much kinetic energy does the bowling ball have? How fast must a 2.45 g table-tennis ball move in order to have the same kinetic energy as the bowling ball? Is this speed reasonable for a table-tennis ball? Example 1 An object moving at a constant speed of 25 meters per second possesses 450 joules of kinetic energy. What is the object's mass? Known: KE = 450 J, v = 25 m/s. Unknown: m = ? kg. Solve: KE = ½mv²; 450 J = ½(m)(25 m/s)²; m = 1.4 kg. Example 2 A cart of mass m traveling at a speed v has kinetic energy KE. If the mass of the cart is doubled and its speed is halved, the kinetic energy of the cart will be a. half as great b. twice as great c. one-fourth as great d. four times as great Example 3 Which graph best represents the relationship between the kinetic energy, KE, and the velocity of an object accelerating in a straight line? Class work Page 174 - #1-5 1. 170 m/s 2. 38.8 m/s 3. The bullet with the greater mass; 2 to 1 4. 2.4 J, 9.6 J; the bullet with the greater speed; 1 to 4 5. 1600 kg Work-kinetic energy theorem The net work done on an object is equal to the change in its kinetic energy. Practice Problem #1 A 1000-kg car traveling with a speed of 25 m/s skids to a stop. The car experiences an 8000 N force of friction. Determine the stopping distance of the car. W_net = ΔKE = KE_f − KE_i; (−8000 N)d = 0 J − 312 500 J; d = 39.1 m. Practice Problem #2 At the end of the Shock Wave roller coaster ride, the 6000-kg train of cars (includes passengers) is slowed from a speed of 20 m/s to a speed of 5 m/s over a distance of 20 meters. Determine the braking force required to slow the train of cars by this amount.
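As a check on the two practice problems, here is a minimal Python sketch (not part of the original slides) that applies KE = ½mv² and the work-kinetic energy theorem with the numbers given above.

```python
# Work-kinetic energy theorem: W_net = ΔKE = KE_f - KE_i, with KE = ½mv².

def kinetic_energy(m, v):
    """Kinetic energy in joules for mass m (kg) moving at speed v (m/s)."""
    return 0.5 * m * v**2

# Practice Problem #1: 1000 kg car at 25 m/s skids to rest against an 8000 N friction force.
m_car, v_car, f_friction = 1000.0, 25.0, 8000.0
delta_ke = 0.0 - kinetic_energy(m_car, v_car)          # -312 500 J
stopping_distance = -delta_ke / f_friction             # friction acts opposite the motion
print(f"stopping distance = {stopping_distance:.1f} m")    # about 39.1 m

# Practice Problem #2: 6000 kg roller-coaster train slowed from 20 m/s to 5 m/s over 20 m.
m_train, v_i, v_f, d = 6000.0, 20.0, 5.0, 20.0
w_net = kinetic_energy(m_train, v_f) - kinetic_energy(m_train, v_i)   # change in KE (negative)
braking_force = -w_net / d                              # magnitude of the braking force
print(f"braking force = {braking_force:.0f} N")         # about 56 250 N
```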
W_net = ΔKE = KE_f − KE_i. The above problems have one thing in common: there is a force which does work over a distance in order to remove mechanical energy from an object. The force acts opposite the object's motion and thus does negative work, which results in a loss of the object's total amount of mechanical energy. In each situation, the work is related to the kinetic energy change: TME_i + W_ext = TME_f, so KE_i + W_ext = 0 J, that is, ½mv_i² + Fd·cos(180°) = 0 J, which gives Fd = ½mv_i² and d ∝ v_i². Stopping distance is dependent upon the square of the velocity. Stopping distance and initial velocity: with F_f = μF_norm = μmg and W_net = ΔKE = 0 − ½mv_i², we get −F_f·d = −½mv_i², so −μmgd = −½mv_i² and d = v_i²/(2μg). practice

Speed (m/s) | Stopping distance (m)
0           | 0
5           | 4
10          | 4 × 2² = 16
15          | 4 × 3² = 36
20          | 4 × 4² = 64
25          | 4 × 5² = 100

Example 5C On a frozen pond, a person kicks a 10.0 kg sled, giving it an initial speed of 2.2 m/s. How far does the sled move if the coefficient of kinetic friction between the sled and the ice is 0.10? Class work Page 176 practice 5C: #1-5 1. 7.8 m 2. 21 m 3. 5.1 m 4. 300 N 5. a. -190 J; b. -280 J; c. 750 J; d. 280 J; e. 7.6 m/s Potential energy An object can store energy as the result of its position. Potential energy is the stored energy of position possessed by an object. Two forms: – Gravitational – Elastic Gravitational potential energy Gravitational potential energy is the energy stored in an object as the result of its vertical position (height). The energy is stored as the result of the gravitational attraction of the Earth for the object. Gravitational attraction between Earth and the object: PE = mgh, where m: mass, in kilograms; g: acceleration due to gravity = 9.81 m/s²; h: height, in meters. GPE and gravity When an object falls, gravity does positive work. Object loses GPE. When an object is raised, gravity does negative work. Object gains GPE. Change in GPE only depends on change in height, not path. As long as the object starts and ends at the same height, the object has the same change in GPE, because gravity does the same amount of work regardless of which path is taken. example The diagram shows points A, B, and C at or near Earth's surface. As a mass is moved from A to B, 100. joules of work are done against gravity. What is the amount of work done against gravity as an identical mass is moved from A to C? As long as the object starts and ends at the same height, the object has the same change in GPE because gravity does the same amount of work regardless of which path is taken: 100 J. Gravitational Potential Energy is relative: To determine the gravitational potential energy of an object, a zero height position must first be assigned. Typically, the ground is considered to be a position of zero height. But it doesn't have to be: – It could be relative to the height above the lab table. – It could be relative to the bottom of a mountain. – It could be the lowest position on a roller coaster. Unit of energy The unit of energy is the same as work: joules. 1 joule = 1 (kg)(m/s²)(m) = 1 newton·meter; 1 joule = 1 kg·m²/s². Work and energy have the same unit. example How much potential energy is gained by an object with a mass of 2.00 kg that is lifted from the floor to the top of a 0.92 m high table? Known: m = 2.00 kg, h = 0.92 m, g = 9.81 m/s². Unknown: PE = ? J. Solve: PE = mgh = (2.00 kg)(9.81 m/s²)(0.92 m) = 18 J. The graph of gravitational potential energy vs.
vertical height for an object near Earth's surface gives the weight of the object. The weight of the object is the slope of the line: weight = 25 J / 1.0 m = 25 N, and m = weight / g = 2.5 kg. Springs are a special instance of a device that can store elastic potential energy due to either compression or stretching. A force is required to compress or stretch a spring; the more compression/stretch there is, the more force is required to compress it further. For certain springs, the amount of force is directly proportional to the amount of stretch or compression (x); the constant of proportionality is known as the spring constant (k). Hooke's Law F = kx Spring force = spring constant × displacement. F is the force needed to displace (by stretching or compressing) a spring x meters from the equilibrium (relaxed) position. The SI unit of F is the newton. k is the spring constant. It is a measure of the stiffness of the spring. A greater value of k means a stiffer spring, because more force is needed to stretch or compress that spring. The SI units of k are N/m. x is the difference between the length of the stretched/compressed spring and its relaxed (equilibrium) length. example Determine x in F = kx. Spring force is directly proportional to the elongation (displacement) of the spring; on a graph of force vs. elongation, the slope represents the spring constant: k = F / x. caution Sometimes we might see a graph of elongation vs. force instead; then the slope represents the inverse of the spring constant: slope = 1/k = x / F. example Given the following data table and corresponding graph, calculate the spring constant of this spring. example A 20.-newton weight is attached to a spring, causing it to stretch, as shown in the diagram. What is the spring constant of this spring? example The graph below shows elongation as a function of the applied force for two springs, A and B. Compared to the spring constant for spring A, the spring constant for spring B is 1. smaller 2. larger 3. the same Example 12A If a mass of 0.55 kg attached to a vertical spring stretches the spring 2.0 cm from its original equilibrium position, what is the spring constant? Class work Page 441, Practice 12A #1-4 1. a. 15 N/m; b. less stiff 2. 320 N/m 3. 2700 N/m 4. 81 N Lab 14 – Hooke's Law (1) 1. Purpose: To determine the spring constant of a given spring. 2. Material: spring, masses, meter stick. 3. Procedure: Hook different masses on the spring, record the force Fs (mg) and corresponding elongation x. Plot the graph of force vs. elongation. 4. Data section: should contain columns for force applied and elongation – data measured directly from the experiment. The units of measurement in a data table should be specified in the column headings only. 5. Data analysis: Graph force vs. elongation on graph paper and answer the following questions: – What does the slope mean in a force vs. elongation graph? – Determine the spring constant. Data table columns: Force (N) | Elongation (m). Elastic potential energy Elastic potential energy is the energy stored in elastic materials as the result of their stretching or compressing. Elastic potential energy can be stored in – rubber bands – bungee cords – springs – trampolines Elastic potential energy in a spring Elastic potential energy is the work done on the spring.
PEs = ½kx², where k: spring constant; x: amount of compression or extension relative to the equilibrium position. Elastic potential energy is directly proportional to x², so a graph of elastic potential energy vs. elongation is a parabola. Example 1 As shown in the diagram, a 0.50-meter-long spring is stretched from its equilibrium position to a length of 1.00 meter by a weight. If 15 joules of energy are stored in the stretched spring, what is the value of the spring constant? PEs = ½kx²; 15 J = ½k(0.50 m)²; k = 120 N/m. Example 2 The unstretched spring in the diagram has a length of 0.40 meter and a spring constant k. A weight is hung from the spring, causing it to stretch to a length of 0.60 meter. In terms of k, how many joules of elastic potential energy are stored in this stretched spring? PEs = ½kx² = ½k(0.20 m)² = (0.020 k) J. Example 3 Determine the potential energy stored in a spring with a spring constant of 25.0 N/m when a force of 2.50 N is applied to it. Given: Fs = 2.50 N, k = 25.0 N/m. Unknown: PEs = ? J. Solve: PEs = ½kx². To find x, use Fs = kx: (2.50 N) = (25.0 N/m)(x), so x = 0.100 m. PEs = ½(25.0 N/m)(0.100 m)² = 0.125 J. Example 4 A 10.-newton force is required to hold a stretched spring 0.20 meter from its rest position. What is the potential energy stored in the stretched spring? Given: F = 10. N, x = 0.20 m. Unknown: PEs = ? J. Solve: PEs = Favg·x = (½F)x = ½(10. N)(0.20 m) = 1.0 J. Sample Problem 5D A 70.0 kg stuntman is attached to a bungee cord with an unstretched length of 15.0 m. He jumps off a bridge spanning a river from a height of 50.0 m. When he finally stops, the cord has a stretched length of 44.0 m. Treat the stuntman as a point mass, and disregard the weight of the bungee cord. Assuming the spring constant of the bungee cord is 71.8 N/m, what is the total potential energy relative to the water when the man stops falling? Class work Page 180 – practice 5D #1-3 1. 3.3 J 2. 0.031 J 3. a. 785 J; b. 105 J; c. 0.00 J
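The spring examples above can be verified with a short Python sketch (again, not part of the original slides); it uses F = kx, PEs = ½kx² and PE = mgh with the values from Example 12A and Sample Problem 5D, taking g = 9.81 m/s².

```python
# Hooke's law and elastic potential energy: F = kx, PEs = ½kx²; gravitational PE = mgh.
g = 9.81  # m/s²

def spring_constant(force, x):
    """Spring constant k = F / x, in N/m."""
    return force / x

def elastic_pe(k, x):
    """Elastic potential energy stored in a spring stretched or compressed by x."""
    return 0.5 * k * x**2

# Example 12A: a 0.55 kg mass stretches a vertical spring 2.0 cm (0.020 m).
k_12a = spring_constant(0.55 * g, 0.020)
print(f"Example 12A: k = {k_12a:.0f} N/m")              # about 270 N/m

# Sample Problem 5D: 70.0 kg stuntman, 15.0 m unstretched cord, 50.0 m bridge,
# cord stretched to 44.0 m when he stops, k = 71.8 N/m.
m, bridge_height, unstretched, stretched, k = 70.0, 50.0, 15.0, 44.0, 71.8
height_above_water = bridge_height - stretched           # 6.0 m above the river
x = stretched - unstretched                               # 29.0 m of stretch
total_pe = m * g * height_above_water + elastic_pe(k, x)  # gravitational + elastic
print(f"Sample Problem 5D: total PE = {total_pe:.2e} J")  # about 3.43e4 J
```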
Throughout the 60-year history of the U.S. space program—from the Mercury capsules of the 1960s to today’s International Space Station—astronauts have been getting sick. Researchers know being in orbit seems to suppress their immune systems, creating a more fertile ground for infections to grow. But nobody really understands why. Early on the morning of May 4, a SpaceX Falcon 9 rocket will launch a cargo mission to the ISS from Cape Canaveral Air Force Station. Along with fresh water, food, and other necessities for the crew, the craft will be carrying two experiments designed by Penn scientists that could help shed light on why bugs have bedeviled space travelers. For more than a decade, Dan Huh, a professor in the School of Engineering and Applied Science, has been developing super-small devices that use living cells to stand in for larger organs. These organs-on-a-chip hold great promise for all kinds of research, from diagnosing disease to curing them. They’re also a way to test things, including drugs and cosmetics, in a way that mimics real life without relying on animal subjects. Last year, Huh and G. Scott Worthen, a professor of pediatrics at the Perelman School of Medicine and a neonatologist at Children’s Hospital of Philadelphia, got a $2 million grant to make the chips, then launch them into orbit, this year and again in 2021. The grant, from the National Center for Advancing Translational Sciences at the National Institutes of Health (NIH), NASA, and the Center for the Advancement of Science in Space, is part of a larger program to understand the physical and biological effects of space travel on humans. “We are thrilled with this rare opportunity to probe one of the potential major health issues in space using our organ-on-a-chip technology,” Huh says. “It has been quite a journey to get to this point and we are all eager to see the successful launch of our devices.” Solving the puzzle of illness in orbit is an important part of NASA’s long-term plan to send people well beyond the Moon, particularly to Mars. There are a number of concerns, including bone and muscle loss from extended periods of microgravity, DNA damage from radiation exposure, and even common motion sickness. Complete data on infections aren’t available, but NASA has reported that 15 of the 29 Apollo astronauts had bacterial or viral infections. Between 1989 and 1999, more than 26 space shuttle astronauts had infections. For a short trip, an infection might not be a big deal. For ISS astronauts who have stayed in orbit for as long as a year or those assigned to future long-distance missions, understanding what’s happening is much more important. The project’s Earth-bound impact will be significant, too. New insights gleaned from this study will deepen our understanding into the complex inner workings of immunity in the human body. This may also help scientists and drug companies develop more effective medical countermeasures for infectious and inflammatory diseases. In general, organ chip devices can offer drug companies a way to test new treatments on human cells without risking harm to an actual person, saving money and improving accuracy in the process. They’re also an alternative to animal testing, which won Huh the Lush Prize last year. And they hold promise for the move toward personalized medicine. Huh’s lab also received a $1 million grant from the Cancer Research Institute to support work creating chips that mimic the interface of cancer and immune cells. 
Ultimately, scientists want to start linking these organs-on-a-chip together, to be able to see how a drug, chemical, or other substance acts all over the human body. For the time being, the breakthroughs in the Huh lab are individual: Huh and his partners have developed models of the eye, lung, placenta, pancreas, cervix, and fat, opening the door to new studies on conditions ranging from preterm birth to diabetes. “Research in my lab over the last six years or so has been driven towards the goal of advancing our ability to emulate the structural and functional complexity of human tissues and organs. As a result, there has been a lot of progress in developing novel devices and in vitro platforms for microfluidic cell culture and tissue engineering,” Huh says. “We are now trying to exploit these advanced systems to model and interrogate biological processes underlying complex human diseases. In collaboration with pharmaceutical companies, we are also beginning to use our organ-chip models for screening the efficacy and safety of therapeutic compounds.” Huh and his team have created two separate experiments for this first launch. The first essentially mimics an infection inside a human airway, to see what happens to the bacteria, and the surrounding cells, in orbit. Huh’s BIOLines lab created the actual chips. Graduate students Andrei Georgescu and Jeongyun Seo work with Huh on the project, while Worthen and postdoc Dipasri Konar handle the lung immunology questions. The lung chip is made of a polymer, and a permeable membrane is the platform for the human cells. For the lung-on-a-chip, one side of the membrane is coated with lung cells, to process the air, and the other with capillary cells, to provide the blood flow. The membrane is stretched and released to provide the bellows-like effect of real lungs. The bone marrow chip contains whole human bone marrow cells, and blood vessels that have been created to mimic what’s in the body. Organic blood vessels, Georgescu says, have signature crimps that keep them together, sort of like the edges of a pie crust. Looking at the cultured vessels through a scanning electron microscope, he says, you can see those crimps. “We’re making real vessels, not just what a vessel should look like,” he says. The bone marrow test aims to observe how the marrow, the source of the white blood cells the body sends out as the first line of defense against infection, behaves in space. The team is looking for the speed of the activation and movement of neutrophils, the most abundant type of white blood cells, in response to the same bacteria, Pseudomonas aeruginosa, that is used to infect the airway cells. The results will help researchers better understand what’s happening. Do the bacteria multiply faster in the airway? Do the neutrophils respond more slowly? Or is there something else going on? During the two weeks that the experiments are active, control experiments will be unfolding back on Earth. Then, once the ISS chips come back a month later, researchers can see what, if anything, happened differently to the two groups. Analyzing the results will take months. “We want to see, for each of these tissues, do they respond differently to these bacteria? It could be at Zero-G there’s an effect on the lung cells, or on the bone marrow, or both,” Georgescu says.
“We’re looking at the two separately, giving the bone marrow a cocktail of what it would see with an infection, and giving the airway a real bacterial infection, so we can see which responses behave differently in space. Then, we’ll connect the two together, so we can look at the process and see what’s happening.” That second combined experiment will launch in 2021, and will also feature a control on Earth.
Breaking new ground
Huh has been working on these projects for many years, since his postdoctoral work at Harvard’s Wyss Institute for Biologically Inspired Engineering, where organs-on-chips were first created. Since coming to Penn in 2013, he has continued to push the work forward, developing other organs and refining the lung-on-a-chip he and colleagues presented in 2010. As new developments are made, the lab is also working on scaling up the production of the existing chips, to make them more viable for everyday use. For example, the polymer plates that are the backbone of the chips need lots of holes punched in them so that the microfluids containing the cells can be injected. The automated hole punch developed in the lab has not just made the process faster, but more accurate, since the hole is cleaner and more uniform. The ISS experiment created a number of additional engineering challenges, Georgescu says, that have kept the team busy even as the launch draws near. Huh and Worthen brought in two microgravity research companies, SpacePharma and Space Tango, to help address some of them. “The space project has kind of forced us to develop solutions on the technical side that otherwise we wouldn’t have needed to do,” Georgescu says. One major obstacle was figuring out how to keep the experiments going, without human intervention, during the two active weeks in orbit. The team built special syringe pumps to help them keep the chips fed. Another challenge was adapting the process of growing the chips to the timetable of the launch, docking with the ISS, and other mission logistics. The chips must be pre-grown, and “seeded” with cells two weeks in advance to give the cells time to grow. The final versions are ready four days before the launch. But because launches are notorious for delays, the team has been working furiously. “We have such a strict timeline,” Georgescu says. “We’ve been making replicas of these over and over in these weeks before the launch, just in case today’s chips are the ones to go up.” Dan Huh is the Wilf Family Term Assistant Professor of Bioengineering in the School of Engineering and Applied Science at the University of Pennsylvania. G. Scott Worthen is a physician-scientist in neonatology at Children’s Hospital of Philadelphia and a Professor of Pediatrics at the Perelman School of Medicine. This research was supported by National Institutes of Health grant 1UG3TR002198. Images courtesy of BIOLines Laboratory.
A matrix is a rectangular arrangement of m × n numbers in the form of m horizontal lines and n vertical lines. These numbers can be real or complex. We call the horizontal lines rows and the vertical lines columns. Matrices are an important topic for any entrance exam; two or three questions on them can usually be expected, so students are advised to learn the topic thoroughly. The rectangular array is enclosed in brackets [ ] or parentheses ( ). A matrix is denoted by A = [aij] m×n. Here a11, a12, etc., are the elements of the matrix A, where aij belongs to the ith row and jth column and is called the (i, j)th element of the matrix A = [aij]. Three algebraic operations are defined for matrices: addition, subtraction, and multiplication. The transpose of a matrix is found by interchanging its rows and columns and is denoted by AT. If A = [aij], then AT = [aji]. Different types of matrices are given below.
- Zero matrix
- Row matrix
- Column matrix
- Square matrix
- Singleton matrix
- Diagonal matrix
- Symmetric matrix
- Skew-symmetric matrix
- Skew-Hermitian matrix
- Orthogonal matrix
- Idempotent matrix
- Nilpotent matrix
A matrix with all elements zero is called a zero or null matrix. A row matrix contains only one row, and a column matrix has only one column. A matrix with only one element is termed a singleton matrix. A matrix with an equal number of rows and columns is called a square matrix. A diagonal matrix is a square matrix in which all the elements except the diagonal elements are zeros.
Operations on Matrices
The three basic operations on matrices are addition, subtraction, and multiplication. For addition and subtraction, the orders of the matrices must be identical. Let A and B be two matrices. To multiply A and B, the number of columns in A must equal the number of rows in B. Matrix multiplication is not commutative, but matrix addition is; that is, A + B = B + A. Permutations deal with arranging items in a definite order, while the selection of items from a group of items is termed a combination. In a permutation, order matters. In short, a combination is about selection and a permutation is about the arrangement of objects without actually listing them; in a combination, items can be selected in any order. Two basic principles of counting are the fundamental principle of counting and the addition principle. As per the fundamental principle, if an event P occurs in n different ways and another event Q occurs in m different ways, then the two events can occur together in m × n different ways. According to the addition principle, if an event P occurs in m different ways, an event Q occurs in n different ways, and both events cannot occur together, then event P or Q can occur in m + n different ways. Visit BYJU’S for more information regarding matrices, permutations, and combinations.
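To make these rules concrete, here is a minimal sketch in Python of the transpose, addition, multiplication, and counting formulas described above. The article itself prescribes no software, so the plain-list representation and the function names below are purely illustrative.

```python
from math import factorial

def transpose(A):
    """Interchange rows and columns: the (i, j) entry of A becomes the (j, i) entry."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def mat_add(A, B):
    """Entrywise sum; both matrices must have the same order m x n."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def mat_mul(A, B):
    """Product AB; the number of columns of A must equal the number of rows of B."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def permutations(n, r):
    """Ordered arrangements of r items chosen from n: n! / (n - r)!"""
    return factorial(n) // factorial(n - r)

def combinations(n, r):
    """Unordered selections of r items from n: n! / (r! (n - r)!)"""
    return factorial(n) // (factorial(r) * factorial(n - r))

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(transpose(A))                   # [[1, 3], [2, 4]]
print(mat_add(A, B))                  # [[1, 3], [4, 4]]
print(mat_mul(A, B), mat_mul(B, A))   # [[2, 1], [4, 3]] vs [[3, 4], [1, 2]] -- not commutative
print(permutations(5, 2), combinations(5, 2))  # 20, 10
```

Printing both AB and BA illustrates the non-commutativity of matrix multiplication mentioned above, and the last line shows why there are more permutations than combinations of the same items.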
One year ago, on Feb. 15, 2013, the world was witness to the dangers presented by near-Earth objects (NEOs) when a relatively small asteroid entered Earth's atmosphere, exploding over Chelyabinsk, Russia, and releasing more energy than a large atomic bomb. Tracking near-Earth asteroids has been a significant endeavor for NASA and the broader astronomical community, which has discovered 10,713 known near-Earth objects to date. NASA is now pursuing new partnerships and collaborations in an Asteroid Grand Challenge to accelerate its existing planetary defense work, which will help find all asteroid threats to human populations and know what to do about them. In parallel, NASA is developing an Asteroid Redirect Mission (ARM) -- a first-ever mission to identify, capture and redirect an asteroid to a safe orbit around Earth's moon for future exploration by astronauts in the 2020s. ARM will use capabilities in development, including the new Orion spacecraft and Space Launch System (SLS) rocket, and high-power Solar Electric Propulsion. All are critical components of deep-space exploration and essential to meet NASA's goal of sending humans to Mars in the 2030s. The mission represents an unprecedented technological feat, raising the bar for human exploration and discovery, while helping protect our home planet and bringing us closer to a human mission to one of these intriguing objects. NASA is assessing two concepts to robotically capture and redirect an asteroid mass into a stable orbit around the moon. In the first proposed concept, NASA would capture and redirect an entire very small asteroid. In the alternative concept, NASA would retrieve a large, boulder-like mass from a larger asteroid and return it to this same lunar orbit. In both cases, astronauts aboard an Orion spacecraft would then study the redirected asteroid mass in the vicinity of the moon and bring back samples. Very few known near-Earth objects are ARM candidates. Most known asteroids are too big to be fully captured and have orbits unsuitable for a spacecraft to redirect them into orbit around the moon. Some are so distant when discovered that their size and makeup are difficult for even our most powerful telescopes to discern. Still others could be potential targets, but go from newly discovered to out of range of our telescopes so quickly there is not enough time to observe them adequately. For the small asteroids that do closely approach Earth, NASA's Near-Earth Object Program has developed a rapid response system whose chief goal is to mobilize NEO-observing assets when an asteroid first appears that could qualify as a potential candidate for the ARM mission. "There are other elements involved, but if size were the only factor, we'd be looking for an asteroid smaller than about 40 feet (12 meters) across," said Paul Chodas, a senior scientist in the Near-Earth Object Program Office at NASA's Jet Propulsion Laboratory, Pasadena, Calif. "There are hundreds of millions of objects out there in this size range, but they are small and don't reflect a lot of sunlight, so they can be hard to spot. The best time to discover them is when they are brightest, when they are close to Earth." Asteroids are discovered by small, dedicated teams of astronomers using optical telescopes that repeatedly scan the sky looking for star-like objects, which change location in the sky slightly over the course of an hour or so. 
Asteroid surveys detect hundreds of such moving objects in a single night, but only a fraction of these will turn out to be new discoveries. The coordinates of detected moving objects are passed along to the Minor Planet Center in Cambridge, Mass., which either identifies each as a previously known object or assigns it a new designation. The observations are collated and then electronically published, along with an estimate of the object's orbit and intrinsic brightness. Automatic systems at NASA's Near-Earth Object Program Office at JPL take the Minor Planet Center data, compute refined orbit and brightness estimates, and update the office's online small-body database. A new screening process for the asteroid redirect mission has been set up that regularly checks the small-body database, looking for potential new candidates for the ARM mission. "If an asteroid looks as if it could meet the criteria of size and orbit, our automated system sends us an email with the subject 'New ARM Candidate,'" said Chodas. "When that happens, and it has happened several dozen times since we implemented the system in March of 2013, I know we'll have a busy day." Things have to happen quickly because these small NEOs are visible, even to the most powerful telescopes, for only a few days during their flyby of Earth. After receiving such an email, Chodas contacts the scientists coordinating radar observations at NASA's Deep Space Network station at Goldstone, Calif., and the Arecibo Observatory in Puerto Rico, to check on their availability. These are massive radar telescopes (the width of the Goldstone dish is 230 feet, or 70 meters, and the Arecibo dish is a whopping 1,000 feet, or 305 meters, wide). They have the capability of bouncing powerful microwaves off nearby asteroids, providing size and rotation information, and at times, even generating detailed images of an asteroid's surface. If these radar telescopes can see an asteroid and track it, definitive data on its orbit and size will quickly follow. Chodas may also contact selected optical observatories run by professionals or sophisticated amateurs, who may be able to quickly turn their telescopes to observe the small space rock. "The optical telescopes play an important role, as their observations can be used to improve our prediction of the orbital path, as well as provide data that helps us establish the rotation rate of an asteroid," said Chodas. Chodas also reaches out to the NASA-funded Infrared Telescope Facility (IRTF) on Mauna Kea, Hawaii. If the IRTF can detect the space rock, it can provide a wealth of detailed data on spectral type, reflectivity and expected composition. "After one of these alerts, there is a lot of calling and emailing going on in the beginning," said Chodas. "Then, we just simply have to wait to see what this worldwide network of assets can do to characterize the physical attributes of the potential ARM target." Scientists estimate that several dozen asteroids in the 20-to-40-foot (6-to-12-meter) size range fly by Earth at a distance even closer than the moon every year. But only a fraction of these are actually detected, and even fewer are in orbits that are good candidates for ARM. Roughly half will pass Earth on the daytime side and are impossible to find in the bright glare of sunlight. Even so, current asteroid surveys are finding tens of asteroids in this size range every year, and new technology is coming online to make detection of these objects even more likely. 
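The screening described above starts from an orbit and a brightness estimate, and the quoted size cutoff of roughly 12 meters has to be inferred from that brightness. A common rule of thumb converts an asteroid's absolute magnitude H into an approximate diameter via D(km) = 1329 / sqrt(albedo) × 10^(-H/5). The sketch below applies that relation with an assumed albedo; it is not NASA's actual screening code, and the function names and the 0.15 albedo are illustrative assumptions only.

```python
import math

def diameter_km(abs_magnitude_h, albedo=0.15):
    """Rough asteroid diameter from absolute magnitude H using the standard
    relation D = 1329 / sqrt(albedo) * 10**(-H / 5).
    The albedo is an assumption: darker asteroids (~0.05) come out larger,
    brighter ones (~0.25) smaller, so the result is only an estimate."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude_h / 5.0)

def passes_size_screen(abs_magnitude_h, albedo=0.15, max_diameter_m=12.0):
    """Size-only screen; the orbital criteria mentioned in the article are ignored here."""
    return diameter_km(abs_magnitude_h, albedo) * 1000.0 <= max_diameter_m

for h in (26, 28, 30):
    d_m = diameter_km(h) * 1000.0
    print(f"H = {h}: ~{d_m:.0f} m across, size-only candidate: {passes_size_screen(h)}")
```

With these assumptions, an object of absolute magnitude 26 comes out near 22 meters and fails the screen, while magnitudes 28 and 30 come out near 9 and 3 meters and pass, which is consistent with the article's point that ARM-sized rocks are faint and hard to spot.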
"The NASA-funded Catalina Sky Survey, which has made the majority of NEO discoveries since its inception in 2004, is getting an upgrade," said Lindley Johnson, program executive for the Near-Earth Objects Program at NASA Headquarters in Washington. "We also will have new telescopes with an upgraded detection capability, like PanSTARRS 2 and ATLAS, coming online soon, and the Defense Advanced Research Projects Agency's new Space Surveillance Telescope will give us a hand as well." As part of its effort to find asteroids hazardous to Earth and destinations for future robotic and human exploration, NASA's NEO program will continue to search for even better potential targets for ARM. Also, NASA's WISE spacecraft has been reactivated and rechristened NEOWISE (http://www.jpl.nasa.gov/news/news.php?release=2014-006) and could be used to characterize potential ARM targets. In an attempt to leave no space-stone unturned, the agency is also combining public-private partnerships, crowdsourcing and incentive prizes to enhance existing efforts. Through its Asteroid Grand Challenge, NASA is reaching out to any and all who may have the next pioneering idea in asteroid research. Of course, all this looking up and out and into the dim recesses of the solar system requires funding. NASA is already spending $20 million per year in the search for potentially hazardous asteroids through the Near Earth Object Observation Program. NASA's FY 14 budget included $105 million to plan for the capture and redirection of an asteroid, increase innovative partnerships and approaches to help us amplify efforts to identify and track and characterize asteroids, and conduct studies for mitigating potential threats. We are learning a lot more about space rocks than we ever had before and along with that the rate of discoveries will continue to climb. And of those, only a portion of the new asteroids discovered is destined to have the right stuff for an asteroid retrieval mission -- the right size and the right orbit to satisfy mission requirements for the asteroid redirect mission. The Near-Earth Object Program Office reports that, with current asteroid surveys already in place, about two potential candidates suitable for the asteroid redirect mission are discovered every year. The rate of discovery is projected to at least double as new imaging assets come online. Does Chodas think there is a perfect target asteroid out there for an asteroid redirect mission? "Absolutely. There are a lot of asteroids out there, and there are a lot of dedicated people down here, looking for them," said Chodas. "You put the two together and it's only a matter of time before we find some space rocks that fit our needs."
NASA must avoid spreading Earth microbes to suspected water in hillside streaks. Four years into its travels across Mars, NASA’s Curiosity rover faces an unexpected challenge: wending its way safely among dozens of dark streaks that could indicate water seeping from the red planet’s hillsides. Although scientists might love to investigate the streaks at close range, strict international rules prohibit Curiosity from touching any part of Mars that could host liquid water, to prevent contamination. But as the rover begins climbing the mountain Aeolis Mons next month, it will probably pass within a few kilometres of a dark streak that grew and shifted between February and July 2012 in ways suggestive of flowing water. NASA officials are trying to determine whether Earth microbes aboard Curiosity could contaminate the potential Martian seeps from a distance. If the risk is too high, NASA could shift the rover’s course — but that would present a daunting geographical challenge. There is only one obvious path to the ancient geological formations that Curiosity scientists have been yearning to sample for years (see ‘All wet?’). “We’re very excited to get up to these layers and find the 3-billion-year-old water,” says Ashwin Vasavada, Curiosity’s project scientist at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California. “Not the ten-day-old water.” The streaks — dubbed recurring slope lineae (RSLs) because they appear, fade away and reappear seasonally on steep slopes — were first reported [1] on Mars five years ago in a handful of places. The total count is now up to 452 possible RSLs. More than half of those are in the enormous equatorial canyon of Valles Marineris, but they also appear at other latitudes and longitudes. “We’re just finding them all over the place,” says David Stillman, a planetary scientist at the Southwest Research Institute in Boulder, Colorado, who leads the cataloguing. RSLs typically measure a few metres across and hundreds of metres long. One leading idea is that they form when the chilly Martian surface warms just enough to thaw an ice dam in the soil, allowing water to begin seeping downhill. When temperatures drop, the water freezes and the hillside lightens again until next season. But the picture is complicated by factors such as potential salt in the water; brines may seep at lower temperatures than fresher water [2]. Other possible explanations for the streaks include water condensing from the atmosphere, or the flow of bone-dry debris. “They have a lot of behaviours that resemble liquid water,” says Colin Dundas, a planetary geologist at the US Geological Survey in Flagstaff, Arizona. “But Mars is a strange place, and it’s worth considering the possibility there are dry processes that could surprise us.” A study published last month used orbital infrared data to suggest that typical RSLs contain no more than 3% water [3]. And other streaky-slope Martian features, known as gullies, were initially thought to be caused by liquid water but are now thought to be formed mostly by carbon dioxide frost. Dundas and his colleagues have counted 58 possible RSLs near Curiosity’s landing site in Gale Crater [4]. Many of them appeared after a planet-wide dust storm in 2007 — possibly because the dust acted as a greenhouse and temporarily warmed the surface, Stillman says. Since January, mission scientists have used the ChemCam instrument aboard the rover — which includes a small telescope — to photograph nearby streaks whenever possible. 
So far, the rover has taken pictures of 8 of the 58 locations and seen no changes. The features are lines on slopes, but they have not yet recurred. “We’ve got two of the three letters in the acronym,” says Ryan Anderson, a geologist at the US Geological Survey who leads the imaging campaign. Curiosity is currently about 5 kilometres away from the potential RSLs; on its current projected path, it would never get any closer than about 2 kilometres, Vasavada says. The rover could not physically drive up and touch the streaks if it wanted to, because it cannot navigate the slopes of 25 degrees or greater on which they appear. But the rover’s sheer unexpected proximity to potential RSLs has NASA re-evaluating its planetary-protection protocols. Curiosity was only partly sterilized before going to Mars, and experts at JPL and NASA headquarters in Washington DC are calculating how long the remaining microbes could survive in Mars’s harsh atmosphere — as well as what weather conditions could transport them several kilometres away and possibly contaminate a water seep. “That hasn’t been well quantified for any mission,” says Vasavada. The work is an early test for the NASA Mars rover slated to launch in 2020, which will look for life and collect and stash samples for possible return to Earth. RSLs exist at several of the rover’s eight possible landing sites. For now, Curiosity is finishing exploring the Murray formation. This area is made of sediments from the bottom of ancient lakes — the sort of potentially life-supporting environment the rover was sent to find. Curiosity’s second extended mission begins on 1 October. Barring disaster, the rover’s lifespan will be set by its nuclear-power source, which will continue to dwindle in coming years through radioactive decay. Curiosity still has kilometres to scale on Aeolis Mons as it moves towards its final destination, a sulfate-rich group of rocks.
References
1. McEwen, A. S. et al. Science 333, 740–743 (2011).
2. Ojha, L. et al. Nature Geosci. 8, 829–832 (2015).
3. Edwards, C. S. & Piqueux, S. Geophys. Res. Lett. http://dx.doi.org/10.1002/2016GL070179 (2016).
4. Dundas, C. M. & McEwen, A. S. Icarus 254, 213–218 (2015).
School debates are a rising star among the interests of teachers, and the reasons for this are numerous and clear: debates increase student engagement and sensitivity towards delicate issues, besides helping students acquire new research skills. However, although this activity has always had space in many education systems around the world, it is fairly new for European schools. This course provides specific tools for teachers to activate, facilitate, and dig deeper into meaningful discussions that lead to learning. As a participant, you will acquire all the tools to introduce debates in your teaching practice, both as a “stand-alone” activity meant to enhance soft skills (such as active listening, researching, empathy, and public speaking) and as part of your curriculum. You will learn to energize discussions that will improve student engagement, develop critical thinking skills, and deepen understanding and appreciation of diverse views. The instructor will provide discussion tools and model exercises to enliven classrooms and ensure that all voices are heard by demonstrating ways to listen to one another and respond with empathy so that all students’ values and needs are respected. Finally, you will become familiar with creating a lesson plan for a debate, as well as with evaluation tools, and you will be able to provide your students with all the tips they need to excel in their debate! After an intense week of practice, during which you will plan, analyze, participate in, and evaluate debates, you will feel confident implementing debates, thereby taking advantage of all the amazing benefits in your classroom. The course will help the participants to:
- Stimulate meaningful, constructive, and engaging discussion in the classroom while ensuring that all students are heard;
- Develop students’ ability to articulate an informed understanding of different topics;
- Model and facilitate deeper listening skills resulting in improved collaboration, group dynamics, and collective learning;
- Cultivate empathy amongst their students while fostering an appreciation for different perspectives;
- Identify in which area of their practice to introduce debates as a teaching tool;
- Design interesting lesson plans involving debates;
- Create evaluation tools for their debate sessions;
- Implement debates as an outcome for curriculum subjects.
Day 1 – Course introduction and needs analysis
- Introduction to the course, the school, and the external week activities;
- Icebreaker activities;
- Analyze the need for debate classes;
- Presentations of the participants’ schools.
Day 2 – The skills for debates
- Theories behind debate practice;
- The concept of active listening;
- The art of public speaking.
Day 3 – Lesson planning with debates
- Implement debates in your curriculum;
- Choose the right topic for your debate;
- Carry out accurate preparatory research;
- Create evaluation tools for your debates.
Day 4 – Let’s debate!
- How to facilitate the emergence of debates;
- Support debating activities while avoiding taking control of them.
Day 5 – Supporting your students
- Reflections about debating activities;
- Create a sustainable debate lesson plan;
- Create tools to help your students prepare for debate sessions.
Day 6 – Course closure & cultural activities
- Course evaluation: round-up of acquired competencies, feedback, and discussion;
- Awarding of the course Certificate of Attendance;
- Excursion and other external cultural activities.
DRAGONBOX ALGEBRA 5+
LESSON PLAN OVERVIEW
DragonBox is one of the best examples of merging game rules with learning content. Consequently, when a player learns the rules of DragonBox, she is also learning some of the core rules and principles of algebra. This activity tries to leverage that by having students ostensibly describe and document the rules of the game and then convert those rules into a description of the basics of algebra. During play, students document the rules of the game (e.g., light and dark versions of the same cards can be combined to form hurricanes). They should be as comprehensive and articulate as possible, acting like technical writers for the game's manual. After they've drafted their manual, students convert this technical documentation into excerpts from an algebra textbook. This activity is designed to coincide with players' first experience with the game, so that they believe themselves initially to just be documenting the game's rules and then later realize they have been simultaneously documenting mathematical rules.
- Use the order of operations to simplify/solve equations with the least number of steps possible.
- Accurately simplify and solve for a single variable using the properties of real numbers under a given operation.
- Students begin DragonBox and document any rules they observe. Make sure students are documenting the rules as precisely and thoroughly as possible. To provide context and guidance, have them consider themselves to be writing instruction manuals for the game.
- Once students see the game start to convert the cards into mathematical numbers, variables, and operations, offer them a new task: converting their documentation (and future documentation) into explanations of algebraic rules. To provide some extra guidance and context, have students write this documentation in the form of algebra textbook examples.
- As an extension, students can share the rules they created in small groups and compile the best entries into one comprehensive mini-textbook, or the best examples could be showcased to the class, discussed, and compiled until students decide on a final list of algebraic rules covered in DragonBox.
- What is the default operation linking all cards?
- What do you think the light and dark versions of cards represent? What is the mathematical term for that?
- What do you think the line between the two sides of the screen represents?
- What do you observe when you move a card over to the box’s side of the screen?
- Why, when you add a card to one side, do you have to add cards on the other side? What property does that represent?
- What does it mean that something is labeled “useless” after you finish a round? How do you prevent a card from being determined useless?
- Why can you eliminate a die with the value “1” when it is connected to another card?
- Why were you able to eliminate a fraction with two equal cards (one on top and the other on bottom) in the same fraction?
- When there’s a box at the bottom of the fraction, what strategy do you use to eliminate it?
Common Core - Mathematics: Expressions & Equations
Grade 6: Apply and extend previous understandings of arithmetic to algebraic expressions.
CCSS.Math.Content.6.EE.A.2 Write, read, and evaluate expressions in which letters stand for numbers.
CCSS.Math.Content.6.EE.A.3 Apply the properties of operations to generate equivalent expressions. 
For example, apply the distributive property to the expression 3 (2 + x) to produce the equivalent expression 6 + 3x; apply the distributive property to the expression 24x + 18y to produce the equivalent expression 6 (4x + 3y); apply properties of operations to y + y + y to produce the equivalent expression 3y.
CCSS.Math.Content.6.EE.A.4 Identify when two expressions are equivalent (i.e., when the two expressions name the same number regardless of which value is substituted into them). For example, the expressions y + y + y and 3y are equivalent because they name the same number regardless of which number y stands for.
Grade 6: Reason about and solve one-variable equations and inequalities.
CCSS.Math.Content.6.EE.B.5 Understand solving an equation or inequality as a process of answering a question: which values from a specified set, if any, make the equation or inequality true? Use substitution to determine whether a given number in a specified set makes an equation or inequality true.
CCSS.Math.Content.6.EE.B.8 Write an inequality of the form x > c or x < c to represent a constraint or condition in a real-world or mathematical problem. Recognize that inequalities of the form x > c or x < c have infinitely many solutions; represent solutions of such inequalities on number line diagrams.
Grade 7: Use properties of operations to generate equivalent expressions.
CCSS.Math.Content.7.EE.A.1 Apply properties of operations as strategies to add, subtract, factor, and expand linear expressions with rational coefficients.
CCSS.Math.Content.7.EE.A.2 Understand that rewriting an expression in different forms in a problem context can shed light on the problem and how the quantities in it are related. For example, a + 0.05a = 1.05a means that “increase by 5%” is the same as “multiply by 1.05.”
Grade 8:
CCSS.Math.Content.8.EE.C.7 Solve linear equations in one variable.
CCSS.Math.Content.8.EE.C.8 Analyze and solve pairs of simultaneous linear equations.
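The equivalence examples cited in these standards (3(2 + x) = 6 + 3x, y + y + y = 3y, and so on) can also be checked mechanically, which mirrors what DragonBox does when it turns card moves into algebra. The lesson plan itself does not call for any software, so the short sympy sketch below is purely illustrative.

```python
from sympy import symbols, expand, simplify, Rational

x, y = symbols("x y")

# Pairs of expressions the Grade 6/7 standards treat as equivalent.
pairs = [
    (3 * (2 + x), 6 + 3 * x),                            # distributive property
    (24 * x + 18 * y, 6 * (4 * x + 3 * y)),               # factoring out a common factor
    (y + y + y, 3 * y),                                   # collecting like terms
    (x + Rational(5, 100) * x, Rational(105, 100) * x),   # "increase by 5%" is "multiply by 1.05"
]

for left, right in pairs:
    # Two expressions are equivalent when their difference simplifies to zero.
    print(left, "equivalent to", right, ":", simplify(expand(left - right)) == 0)
```

Each line prints True, which is exactly the "name the same number regardless of which value is substituted" criterion stated in CCSS.Math.Content.6.EE.A.4; exact rationals are used for the percent example so the check is not clouded by floating-point rounding.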
Fabales, order of dicotyledonous flowering plants in the Rosid I group among the core eudicots. The order comprises 4 families (Fabaceae, Polygalaceae, Quillajaceae, and Surianaceae), 754 genera, and more than 20,000 species. However, more than 95 percent of the genera and species belong to Fabaceae, the legume family. Fabaceae is the third largest family of angiosperms, exceeded only by Asteraceae (aster or sunflower family) and Orchidaceae (orchid family). Along with Poaceae (the grass family), Fabaceae is the most important plant family in the production of food for humans and livestock, as well as in the production of industrial products. Because they develop bacteria-harbouring root nodules that maintain the nitrogen balance in the soil, which is necessary for plant growth, the legumes are also an essential element in nature and in agriculture. Legumes are perhaps best known by their more common cultivated names, such as peas, beans, soybeans, peanuts (groundnuts), alfalfa (lucerne), and clover. The characteristic fruit of most legumes is a pod (legume) consisting, in essence, of an ovary that is a tightly folded leaf, as in a pea pod. The pod normally splits into two halves when mature. Distribution and abundance Fabaceae, with about 730 genera and nearly 20,000 species, occurs in all terrestrial habitats occupied by plants, although the greatest number of species is in the tropics, where the group probably originated. There are also many legumes in the temperate plains, woodlands, and deserts. A few succeed as weeds in farming, industrial, and urban environments. They are less common in the northern boreal forests and are rare in aquatic habitats. Beyond their natural occurrence, many legumes—e.g., Glycine max (soybeans) and Phaseolus (several species of beans)—are cultivated every year on a single vast area of land. Many species are seeded as pasture components; others are planted for soil improvement or to prevent erosion; woody species are grown for firewood and timber in developing countries; and dozens of species are popular ornamentals. Thus, legumes are cosmopolitan, not only in the wild but also in the human environment that has replaced the wilderness throughout much of the world. Polygalaceae, the milkwort family, is the second largest family in the order, with about 21 genera and some 1,000 species. Its members are distributed worldwide, except for the Arctic and New Zealand. The genus Polygala contains about a third of the species in the family. Surianaceae, with five genera and eight species, is restricted to Australia (Cadellia, Guilfoylia, and Stylobasium), Mexico (Recchia), and pantropical littoral areas (Suriana). Quillajaceae, with one genus (Quillaja) and three species, is restricted to temperate South America. Characteristic morphological features Members of Fabaceae include trees, herbaceous or woody vines, and perennial or annual herbs. The leaves are usually compound, and in some the leaflets are secondarily compound. The simple leaves of some are presumably reduced from the compound forms. The most striking of these modified leaf forms are the several hundred species of Australian Acacia, in which the apparently simple leaf represents the flattened and modified axis of a compound leaf. Stipules, a pair of appendages subtending the leaf petiole, are usually present. The flowers may be solitary or bunched in leaf axils. The inflorescences, when present, are of various kinds, simple or branched in diverse ways. 
The flowers are usually bisexual, but unisexual flowers occur sporadically throughout the family. Some legumes produce two kinds of flowers, commonly on the same plant. The typical kind have conspicuous petals that open so that cross-pollination (in some, an obligatory mechanism of propagation) is possible (chasmogamous); in others all parts are reduced and the petals do not open, thus enforcing self-pollination (cleistogamous). In the chasmogamous flowers, the sepals are most commonly partly fused, and the five petals alternate in position with the sepals. There are commonly 10 stamens, but there may be fewer or more. The stamens may remain free or they may be fused into a single tubular structure (monadelphous) or into a group of nine united stamens with a free stamen above this (diadelphous). Most of these floral features, however, also can be found in other plant families. It is the pistil, or gynoecium, of Fabales that is unique. The single carpel develops into a fruit (the pod, or legume) that generally splits open (dehisces) along one or both edges (sutures) at maturity, releasing the seeds that have developed from the ovules. This basic legume type is idealized in a pea or bean pod, which bears two rows of marginally placed ovules along the upper suture. But evolution within the family has variously modified many legume fruits, and they bear but scant resemblance to that of a bean or pea. Some retain the form of the basic type but do not split open when ripe (indehiscent), as with Robinia (locusts) and Cercis (redbud). In many Fabaceae—for instance, Melilotus (sweet clover)—the fruit has been reduced to a single-seeded indehiscent structure that resembles a tiny nutlet. In others, it is several-seeded and indehiscent but is divided transversely into single-seeded segments that break apart at maturity (e.g., Desmodium). In another variant, the fruit coat becomes fleshy and plumlike as in the tropical Andira inermis (angelin tree). There are species in which the fruit is flattened and winged, facilitating wind transport. A few legumes have fruits that are produced or that mature underground; the peanut (Arachis hypogaea) is the best-known example. The peanut flower is actually produced above ground but assumes a position close to the soil surface as it ages. The ovary elongates and develops as a subterranean pod. All of these and other modifications are derivative and can be traced back to the basic, dehiscent pod of the pea or bean. Seeds within the legumes are also variable, ranging from the size of a pinhead to that of a baseball. Legume seeds are sometimes quite colourful; the Abrus precatorius (jequirity bean) and Ormosia species, for example, produce striking black and red seeds. These seeds have been used as currency by native peoples and in the production of beads and handbags, especially in the more tropical regions. They may be quite poisonous if eaten, however. Polygalaceae contains trees, shrubs, and even parasitic herbs that lack green chlorophyll. Leaves are usually spirally arranged, simple, and without toothed margins. The flowers are bisexual, usually strongly irregular in shape, and superficially similar to legume flowers of Fabaceae. Flowers are arranged in spikes, racemes, or panicles. Two of the five sepals are often petaloid. The three or five petals are often fused to the anthers to form a tube, and the stamens often open by terminal slits. Fruits that develop are quite variable in structure, including capsules, nuts, winged samaras, and fleshy drupes. 
Many seeds contain an aril. Members of Surianaceae are trees or shrubs with simple, spirally arranged leaves. The flowers are bisexual and usually radially symmetric, with five sepals and petals except in the unisexual and wind-pollinated Stylobasium, which lacks petals. The number of carpels varies from one to five. The fruits are berries, drupes, or nuts. The three species of Quillaja, the only genus in Quillajaceae, are small, evergreen trees with bark containing considerable quantities of saponins. Leaves are spirally arranged and toothed. The terminal inflorescence contains five parted flowers usually with a well-developed nectary inserted into the hypanthium (floral cup). The flowers are rather remarkable looking with the stamens opposite the sepals borne on the outer edge of the hypanthium disk and the stamens opposite the petals borne near the base of the ovary. The five carpels of the ovary mature into follicles splitting along one side to release winged seeds. Classification of Fabaceae Fabaceae has traditionally been divided into three subfamilies: Caesalpinioideae, Mimosoideae, and Faboideae (or Papilionoideae), each of which have been considered a separate plant family in the past. Classifications based on molecular analyses now separate Caesalpinioideae into several lineages and recognize the tribe Cercideae as a separate and more basal group in the family. The floral types in the legume family are quite variable, with flowers ranging from regular (i.e., actinomorphic, radially symmetric) in Mimosoideae to highly irregular (i.e., zygomorphic, bilaterally symmetric) in Faboideae. The flowers of the tribes Cercideae and Caesalpinioideae are somewhat intermediate between these extremes as regards symmetry. Cercideae is a small tropical and temperate woody group (e.g., Cercis, Bauhinia) in which the leaves are apparently simple and often bilobed. The flowers of Cercis are only superficially similar to those of Faboideae. The subfamily Caesalpinioideae (classified as a family, Caesalpiniaceae, by some authorities) is a heterogeneous group of plants with about 160 genera and some 2,000 species. The latest classifications show that this subfamily is the most basal lineage among the legumes and the one from which the other two subfamilies evolved. In that sense it is not a true monophyletic group, and it will undoubtedly be treated taxonomically in a different way in the future. Caesalpinioideae legumes are found throughout the world but are primarily woody plants in the tropics. Their moderate secondary invasion of temperate regions is mostly by herbaceous (nonwoody) evolutionary derivatives. The presence of Gleditsia triacanthos (honey locust) and of the related Gymnocladus dioica (Kentucky coffee tree) in temperate regions is a striking exception to this generalization, however, and they may represent more ancient and relictual lineages in the subfamily. Caesalpinioideae is more variable than the other three groups. The leaves are usually divided into leaflets (compound), or else the leaflets are again divided into leaflets (bicompound). The flowers also vary in symmetric form, from nearly radial to bilateral to irregular (symmetric in no plane). The sepals are usually separate and imbricate (overlapping in the bud). There are generally five separate imbricate petals, the upper one inside of the lateral petals in the bud. The 10 or fewer stamens are exposed, although not as conspicuously as in many of the members of the subfamily Mimosoideae described below. 
The fruit conformation is diverse. Bacterial nodulation is much less prevalent than in either of the other two subfamilies. Canavanine is not present. Many Caesalpinioideae species are prized ornamentals in the tropics, such as Delonix regia (royal poinciana), Cassia grandis (pink shower), and Bauhinia (orchid trees). Gleditsia triacanthos (honey locust) is well known in temperate regions. The subfamily Mimosoideae (classified as a family, Mimosaceae, by some authorities) includes 82 genera and more than 3,200 species. Like Caesalpinioideae, Mimosoideae legumes are primarily woody plants of the tropics, and the few species native to temperate parts of the world are mostly herbaceous. The majority of Mimosoideae have large leaves that are divided into secondary (compound) leaflets, and in many these leaflets are again divided (bicompound) and have a feathery, sometimes fernlike appearance. A striking exception is that of most of the Australian acacias (but not of the American kinds) mentioned above, in which the compound leaves have become modified, losing all their leaflets and appearing to be undivided, or simple. The flowers of the family are radially symmetric and are usually most easily recognized by the long stamens that extend beyond the rest of the flower. The calyx and corolla are both valvate in bud, contrasting with the usual condition in both of the other subfamilies. The petals are small and often not noticed except by close examination. Many of these plants have nodules containing the nitrogen-fixing bacterium Rhizobium on their roots. Mimosa pudica (sensitive plant) is sometimes grown as a novelty because its leaves quickly fold up when touched. Albizia julibrissin (mimosa, or silk, tree), a widely planted ornamental in the southern United States, folds its leaves together at dusk, decreasing by at least half the amount of leaf surface exposed to the atmosphere. The movement is caused by changes in water pressure in specialized structures at the base of the petioles and leaflets. The subfamily Faboideae, also called Papilionoideae (classified as a family, Fabaceae or Papilionaceae, by some authorities), is the largest group of legumes, consisting of about 475 genera and nearly 14,000 species grouped in 14 tribes. The name of the group probably originated because of the flower’s resemblance to a butterfly (Latin: papilio). It is the unique bilaterally symmetric (zygomorphic) flowers that especially characterize the group, so that thousands of species can be recognized as a member of Papilionoideae at a glance. The Lathyrus odoratus (sweet pea) flower provides an example. It has a large petal at the top, called the banner, or standard, that develops outside of the others before the flower has opened, two lateral petals called wings, and two lower petals that are usually fused and form a keel that encloses the stamens and pistil. The whole design is adapted for pollination by insects or, in a few members, by hummingbirds. Sweet nectar, to which the insects are cued by coloured petals, is the usual pollinator attractant. Various locking and releasing devices of the keel and wing petals control pollination in diverse ways favouring (or enforcing) either outcrossing or self-pollination—e.g., Trifolium (clover), Medicago (alfalfa), and Lotus corniculatus (bird’s-foot trefoil). The most effective kind of obligate self-pollination, however, is that of cleistogamous flowers, which do not open and thus prevent the entry of insects. 
Lespedeza and many other genera of Papilionoideae legumes bear both kinds of flowers, generally on the same plant. Enforced inbreeding serves to fix and maintain successful strains; outbreeding provides evolutionary diversity that may facilitate habitat or range expansion or may serve to provide flexibility for environmental changes. The calyx is composed of fused sepals. The stamens are 10 or fewer and are free in a few tribes but are most commonly fused at their filaments (monadelphous) or fused at all filaments but one, which remains free (diadelphous). The ovary has a single carpel and develops into various fruit types. Like the other subfamilies, members of Papilionoideae have their origins in the tropics, but their occupation of the arid and temperate parts of the world, mostly as herbaceous plants, is far more extensive. In the forests, prairies, and deserts, they are among the most common plants. The largest genus of legumes, Astragalus (2,400 to 3,300 species, known as locoweed), is mainly western North American but also occurs in Eurasia, India, Africa, and South America. These temperate legumes have mostly pinnate leaves among which those with three leaflets (trifoliolate) are common—e.g., beans and soybeans. Trifoliolate leaves rarely occur in the other subfamilies. The large genus of Lupinus (lupines) generally has 5 to 11 (occasionally up to 15) palmate leaves. The leaves of clovers are most commonly palmately trifoliolate, as are those of Baptisia. In one tribe the leaf axis terminates in a tendril, which facilitates climbing; members include the sweet pea and Vicia (vetches). The symbiotic relationship between Rhizobium and the plant, which takes place in root nodules and "fixes" atmospheric nitrogen into compounds useful to the plant, is most strongly developed in Papilionoideae legumes. The legumes produce many kinds of chemical substances—e.g., alkaloids, flavonoids, tannins, and the free amino acid canavanine (the latter found only in legumes). The function of those that are physiologically active (i.e., often poisonous) in animals seems usually to be that of predator defense. The medical potential (especially of the alkaloids) of some of these substances, or of their synthetic derivatives, has been extensively studied. The absence or presence and distribution of these substances in the various groups are also used in legume classification. Information about other cryptic features, such as pollen and plant anatomy, contributes to scientific knowledge of legume evolution as well. The origin of Fabales and its relationship to other plant families and orders are now becoming clearer. The order is closely related to a group of Rosid orders that also contain nitrogen-fixing plants: Rosales, Cucurbitales, and Fagales. Members of these orders which do fix nitrogen, however, use root-dwelling actinomycetes, typically Frankia, rather than Rhizobium and relatives used in legumes. There are at least six independent origins of the symbiotic relationships of Frankia and host plants. Within the order Fabales, the family Polygalaceae is most distant, with the three other families, Surianaceae, Quillajaceae, and Fabaceae, possessing stipules and separate carpels. Of the latter three families, Quillajaceae and Fabaceae are most closely related and share the feature of clawed petals. Molecular evidence confirms the hypothesis that Caesalpinioideae includes the earliest diverging lineages among the legumes. 
This was also the prevailing theory prior to molecular studies, based on the group’s high diversity in the tropics, an extended fossil record, and the wide variation of floral and vegetative structures beyond the specializations in the other two subfamilies. The unique Rhizobium nitrogen-fixation symbiosis is much less developed in Caesalpinioideae than in the other groups; indeed, it seems to have originated in this subfamily. What is becoming more clear, however, is that Caesalpinioideae legumes are more diverse than previously thought and that the other two subfamilies, Mimosoideae and Papilionoideae, were derived from particular lineages among the diverse Caesalpinioideae legumes. This strengthens the idea that legumes form a single family; however, the phylogenetic relationships within the family are more complex than the former simple division into three subfamilies. A clearer picture is expected to develop in the future as further molecular analysis is obtained. Ecological and economic importance The unique ecological role of Fabaceae is in nitrogen fixation. Nitrogen is an element of all proteins and is an essential component in both plant and animal metabolism. Although elemental nitrogen makes up about 80 percent of the atmosphere, it is not directly available to living organisms; nitrogen that can be metabolized by living organisms must be in the form of nitrates or ammonia compounds. Through a mutual benefit arrangement (symbiosis) between legumes and Rhizobium bacteria, nitrogen gas (N2) is fixed into a compound and then becomes available to the biotic world. The legume plant furnishes a home and subsistence for the bacteria in root nodules. In a complex biosynthetic interaction between the host plant and the bacterium, nitrogen compounds are formed that are used by the host plant. These compounds are also available to other plants after decayed roots (and other plant parts) of the host plant have allowed these nitrogen products to be released into the soil. Animals obtain compound nitrogen by eating plants or other animals. Consequently, the vegetation of the forests, prairies, and deserts of most of the world is primarily dependent on the legume component of their vegetation and could not exist without it. Only in a few ecosystems—those that include few legume species—have alternative biological nitrogen-fixing arrangements evolved. These include symbiotic relationships between miscellaneous woody species other than legumes, and certain actinomycetes or bacteria and are limited mostly to boreal evergreen forests, certain coastal areas, and acid bogs. Nitrogen fixation by free-living cyanobacteria seems to be important in aquatic ecosystems. On a worldwide scale, however, these alternate arrangements of nitrogen fixation are relatively minor compared with those supported by legumes. Legume nitrogen fixation is of prime importance in agriculture. Before the use of synthetic fertilizers in the industrial countries, the cultivation of crop plants, with the exception of rice, was dependent on legumes and plant and animal wastes (as manure) for nitrogen fertilization. A common procedure was the use of crop rotation, usually the alternation of a cash grain crop such as corn (maize) with a legume, often alfalfa (Medicago sativa), in the temperate world. Apart from the nitrogen contribution, the legume in this case furnishes animal forage (hay or silage). Pastures or other grazing areas must have legume components, such as a clover (Trifolium), as well as a grass component. 
The 20th-century substitution of petroleum-derived synthetic nitrogen fertilizers is partly a consequence of economics in that a cash grain, such as corn or wheat, planted every year provides a higher fiscal return than alternating it with a legume crop. In addition, legume-rhizobium nitrogen fixation is inhibited when the level of nitrogen in the soil is high and is not sufficient for maximum yields of a grass crop. Therefore, in developed countries chemical fertilizers have largely replaced biological fixation in row-crop culture. On a worldwide basis, however, dependence on legumes is still preeminent. Even in the United States, when rangeland and pasture agriculture are included, it has been estimated that nitrogen production by biological fertilizers still exceeds chemical application. Other benefits accrue from the use of legumes to maintain soil nitrogen. Weed control is facilitated by a crop sequence that alternately changes the growing environment. Such legumes as alfalfa may be harvested for forage (hay or silage) or grazed by livestock. As cover crops, legumes prevent or reduce soil erosion and may be plowed under as “green manure.” Even though starch-producing grasses such as corn are more efficient under favourable conditions in producing energy foods, grain legumes are commonly grown in the tropics because they are more successful in depleted, nitrogen-deficient soils. Legume seeds constitute a part of the diet of nearly all humans. Their most vital role is that of supplying most of the protein in regions of high population density and in balancing the deficiencies of cereal protein (Poaceae). Except for the soybean and peanut, the order is not noted for the oil content of the seeds since most seeds have only about 10 percent oil content by weight. The legume seeds generally are highest in carbohydrate compounds, followed by protein and fat. Legumes are thus considered to be energy foods. Most legumes that are used for foods are multipurpose plants, serving for animal forage and soil improvement as well. Some, notably the soybean, are also important industrial crops. Fabaceae contains the more important crop plants, such as soybeans, beans, cowpeas (Vigna), pigeon peas (Cajanus cajan), chick-peas (Cicer arietinum), lentils (Lens culinaris), peas (Pisum sativum), and peanuts. Forage legumes (which concentrate their vitamins and proteins in their young growing parts) also are grown as animal feed. Their role as such is especially common in countries that can afford the luxury of meat (luxury because livestock typically yield fewer calories than the plants they are fed). Some major forage legumes of the temperate world include clovers, alfalfa, bird’s-foot trefoil (Lotus corniculatus), and vetches. In the tropics or arid regions, some of the important elements of the habitat are species of Glycine (soybean), Stylosanthes, and Desmodium (tick trefoil). Apart from the legume plants of worldwide importance, the following are examples of locally significant legume species that are cultivated or gathered from the wild. Some would plainly have substantial potential were they subject to genetic evaluation and development through modern breeding techniques. They are still in the same stage as teosinte (the ancestor of corn) or einkorn and emmer (the ancestors of the modern varieties of cultivated wheats) in yield and utilization potential. 
Notable among the locally useful plants of the legume family is Vigna subterranea (Bambara groundnut), a leguminous plant that develops underground fruits in the arid lands of Africa. Important too are the seeds of Bauhinia esculenta; they are gathered for the high-protein tubers and seeds. Vigna aconitifolia (moth bean) and V. umbellata (rice bean) are much used in the tropics for forage and soil improvement, and their seeds are palatable and rich in protein. Psophocarpus tetragonolobus (winged bean) is collected in Southeast Asia for the edible fruits and protein-rich tubers. Pachyrhizus (yam bean) is a high-yield root crop of Central America. Various forms of leucaena (such as Leucaena leucocephala) have been developed for animal forage, firewood, and construction, as well as for the high production of nitrogen that enriches impoverished soils, especially in the Asiatic tropics. Other important plants are acacia, used for animal food (both pods and leaf forage), for soil improvement and revegetation, and as a source of tannin and pulpwood; Cordeauxia edulis (yeheb), an uncultivated desert shrub of North Africa that has been so extensively exploited for food (seeds) that it is in danger of extinction; Ceratonia siliqua (carob), a Mediterranean plant whose fruits are used as animal and human food and in the manufacture of industrial gums; and Tamarindus indica (tamarind) of Africa, now primarily grown in India, which has food and medicinal uses and is also used as an industrial gum. The soybean is a bushy annual whose seeds are an important source of oil and protein. An edible oil pressed from the seeds is used to make margarine and as a stabilizing agent in the processing of food and the manufacture of cosmetics and pharmaceuticals. The oil is employed in such industrial products as paint, varnish, printing ink, soaps, insecticides, and disinfectants. Oil cakes pressed from the seeds are used as protein concentrate in the mixed-feeds industry. The soybean is a good source of vitamin B and is dried to produce soy milk, which is used in infant formulas. Fermented pods are used in making soy sauce, a flavouring common in Asian cooking. The peanut, a native of South America, is high in vitamin B complex, proteins, and minerals. The peanut is eaten raw or roasted or is processed into peanut butter. An edible oil is pressed from the seed and is used as a cooking oil and in processing margarine, soap, and lubricants. The oil also is employed by the pharmaceutical industry in making medications. Pressed oil cake is fed to livestock. Peanuts are commercially grown in the United States, Asia, Africa, and Central and South America. Legumes in general are used to revitalize nutrient-depleted soils, especially abandoned or abused agricultural and grazing lands. A more stringent revegetational challenge is that following strip-mining. Generally speaking, native legumes are common in these habitats because they are better able to thrive in nitrogen-poor soils than other plants. As mentioned above, the legumes produce secondary compounds of an irritating or poisonous nature that provide protection against predators. Some of these secondary compounds are being studied for their pharmacological potential. They are found in the leaves and fruiting parts and include flavonoids, alkaloids, terpenoids, nonprotein amino acids, and others. Some of these—for example, the amino acid canavanine—may comprise up to 5 percent of the dry weight of seeds. 
The chemical compound rotenone, which is toxic to a number of organisms, is sufficiently abundant in the roots and stems of certain species belonging to the Papilionoideae that primitive peoples often used these plants to poison fish. More recently it has been shown that serious bone and neural diseases afflicting humans (e.g., lathyrism) and livestock may be caused by the ingestion of unusually large amounts of certain free amino acids. In sheep, ingestion of large quantities of the amino acid mimosine, found in Leucaena glauca and some other species of the Mimosoideae, apparently halts the growth of hair or wool, and in certain cases the fleece itself has been observed to shed.

A wide variety of alkaloids are found in the order, though most of them are restricted to Fabaceae. Some alkaloids occur in sufficient concentration in range plants to be poisonous to livestock, especially in species belonging to the large genus Astragalus. Species of Astragalus are commonly referred to as locoweed in North America because, following excessive consumption of these plants, cattle seem to become unmanageable and “go crazy” or “loco.” Astragalus is poisonous in any of three ways: by promoting selenium accumulation, through locoine, and through several nitrogen-containing toxins. In the early 20th century, several African species of Crotalaria were brought to the United States for use as soil-improvement plants. Their poisonous qualities were discovered in connection with livestock losses, and development was then halted, but several persist as common noxious weeds.

An interesting biochemical component of the legume seed is phytohemagglutinin, a large protein molecule that is specific in its capacity to agglutinate certain human blood types. Approximately 60 percent of the several thousand seeds belonging to this order that have been tested to date contain the compound. Phytohemagglutinin is particularly abundant in the common bean and has been extracted in a relatively pure state on a commercial scale from species belonging to this genus. In addition to its agglutination properties, the compound has been of interest because of its other biological effects. It is toxic to rats, inactivates some human tumor cells, and has beneficial effects in the treatment of aplastic anemia, the shortage of blood cells in humans due to the destruction of blood-forming tissues.

The subfamilies Caesalpinioideae and Mimosoideae do not contain many food crops and are perhaps best known for their shade and ornamental species, such as Cercis siliquastrum (the Judas tree, or redbud), Bauhinia bartlettii (orchid tree), and Acacia farnesiana (sweet acacia), although some of the more rapid-growing weedy species—for example, Leucaena leucocephala (white popinac) and Albizia species—are widely employed as green manure and fodder crops. Acacia species are used extensively in the production of gum exudates and wood, especially in South Africa and Australia, where the species are known as wattle trees.
Circle A Number
For this probability worksheet, students explore and test the theory of probability. Students use strips of paper to take a poll of family members and then fill out a chart.

Related resources:
- The Circumference of a Circle and the Area of the Region it Encloses (6th-8th Math, CCSS: Designed). Bring your math class around with this task. Learners simply identify parts of a given circle, compute its radius, and estimate the circumference and area. It is a strong scaffolding exercise in preparation for applying the formulas...
- Using the Number Line to Model the Addition of Integers (7th Math, CCSS: Designed). The second lesson in a series of 25 shows the class how to use arrows and a number line to add integers. Learners use their knowledge of the commutative property and absolute value in the explanation. Classmates play the integer game...
- Micro-Geography of the Number Line (5th-8th Math, CCSS: Adaptable). Young mathematicians dive into the number line to discover decimals and how the numbers infinitely get smaller in between. They click the zoom button a few times and learn that the number line doesn't just stop at integers. Includes a...
- Practice: Measuring Angles and Using a Protractor and More! (4th-6th Math, CCSS: Adaptable). Four fabulous worksheets are included in this resource, all having to do with the measurement of angles. On the first, anglers will use a protractor to determine the degrees of 10 different angles. An arc is drawn on each. On the second,...
- Rational Number Project: Initial Fraction Ideas (3rd-6th Math, CCSS: Adaptable). Deepen the fractional number sense of young learners with this introductory lesson on equivalent fractions. After completing a short warm-up activity, children go on to work in pairs using fraction circles to complete a table of...
MATH TERMS GLOSSARY

Acute angle: An angle with a measure less than 90°.
Addend: Any number that is being added.
Analog time: Time displayed on a timepiece having hour and minute hands.
Area: The measure, in square units, of the inside of a plane figure.
Array: A rectangular arrangement of objects in equal rows or columns.
Bar graph: A graph that uses bars to show data.
Chord: A line segment whose endpoints are on a circle.
Circumference: The distance around a circle.
Combination: A group of items. Placing these items in a different order does not create a new combination.
Composite number: A whole number having more than two factors.
Cone: A solid figure that has a circular base and one vertex.
Congruent: Having the same size and shape. EXAMPLE: Congruent angles have the same measure; congruent segments have the same length.
Cube (noun): A rectangular solid having six congruent, square faces.
Cube (verb): To raise a quantity or number to the third power: (x)(x)(x).
Cylinder: A three-dimensional figure with two circular bases, which are parallel and congruent.
Diameter: A line segment that has endpoints on a circle and passes through the center of the circle.
Difference: The answer in a subtraction problem. EXAMPLE: 8 - 3 = 5; 5 is the difference.
Edge: The line segment where two faces of a solid figure meet.
Equation: A statement that two mathematical expressions are equal.
Equilateral triangle: A triangle with all three sides of equal length. The angles of an equilateral triangle are always 60°.
Equivalent: Having the same value.
Expanded notation: A way to write numbers that shows the value of each digit.
Expression: A variable, or any combination of numbers, variables, and symbols that represents a mathematical relationship (EXAMPLE: 24 x 2 + 5 or 4a - 9).
Face: A plane figure that serves as one side of a solid figure.
Fact family: A set of related addition and subtraction, or multiplication and division, equations using the same numbers (EXAMPLE: 6+9=15, 15-9=6, 9+6=15, 15-6=9).
Factor: A whole number that divides evenly into another whole number (EXAMPLE: 1, 3, 5, and 15 are factors of 15).
Function: A relation in which every input value has a unique output value.
Greatest common factor (GCF): The largest factor that 2 or more numbers have in common.
Heptagon: A polygon with 7 sides.
Hexagon: A polygon with 6 sides.
Histogram: A bar graph in which the labels for the bars are numerical intervals.
Hypotenuse: The longest side of a right triangle (which is also the side opposite the right angle).
Inequality: A mathematical sentence that contains a symbol showing that the terms on either side of the symbol are unequal (EXAMPLE: 3+4>6).
Intersecting lines: Lines that cross.
Irrational number: A number that cannot be written as a simple fraction; its decimal goes on forever without repeating. It is called "irrational" because it cannot be written as a ratio (or fraction).
Isosceles triangle: A triangle with two equal sides.
Least common denominator (LCD): The least common multiple of the denominators in two or more fractions.
Least common multiple (LCM): The smallest number, other than zero, that is a common multiple of two or more numbers.
Leg (of a right triangle): Either of the two sides that form the right angle in a right triangle.
Line: A straight path extending in both directions with no endpoints.
Line of symmetry: A line that divides a figure into two halves that are mirror images of each other.
Line plot: A graph showing the frequency of data on a number line.
Line segment: A part of a line with two endpoints.
Mean (average): The number found by dividing the sum of a set of numbers by the number of addends.
Median: The middle number in an ordered set of data, or the average of the two middle numbers when the set has two middle numbers.
Mode: The number(s) that occurs most often in a set of data.
Multiples: The product of a given whole number and another whole number (EXAMPLE: multiples of 4 are 4, 8, 12, 16...).
Nonagon: A polygon with 9 sides.
Number sentence: An equation or inequality with numbers.
Obtuse angle: An angle with a measure more than 90°.
Octagon: A polygon with 8 sides.
Ordered pair: A pair of numbers used to locate a point on a coordinate grid. The first number tells how far to move horizontally, and the second number tells how far to move vertically.
Parallel lines: Lines that never intersect and are always the same distance apart.
Parallelogram: A quadrilateral whose opposite sides are parallel and congruent.
Perimeter: The distance around a figure.
Permutation: The action of changing the arrangement, especially the linear order, of a set of items.
Perpendicular lines: Two lines, segments, or rays that intersect to form right angles.
Pictograph: A graph that uses pictures to show and compare information.
Plane: A flat surface that extends infinitely in all directions.
Prime number: A whole number that has exactly two factors, 1 and itself.
Product: The answer to a multiplication problem. EXAMPLE: 6 x 2 = 12; the product is 12.
Pyramid: A solid figure with a polygon base and triangular sides that meet at a single point (vertex).
Quadrilateral: A polygon with 4 sides.
Radius: A line segment that has one endpoint on a circle and the other endpoint at the center of the circle.
Range: The difference between the greatest and least numbers in a set of data.
Rate: A ratio that compares two quantities having different units (EXAMPLE: 95 miles in 2 hours).
Ratio: A comparison of two numbers using division.
Ray: A part of a line that has one endpoint and continues without end in one direction.
Rectangular prism: A solid figure in which all six faces are rectangles.
Regular polygon: A polygon that has all sides congruent and all angles congruent.
Remainder: The amount left over when a number cannot be divided equally.
Repeating decimal: A decimal that has a repeating sequence of numbers after the decimal point.
Rhombus: A parallelogram with four equal sides.
Right triangle: A triangle that has a 90° angle.
Rotation (turn): A movement of a figure that turns that figure around a fixed point.
Scalene triangle: A triangle in which no sides are equal.
Similar polygons: Polygons that have the same shape, but not necessarily the same size. Corresponding sides of similar polygons are proportional.
Sphere: A solid figure that has all points the same distance from the center.
Square (noun): A 4-sided polygon in which all sides have equal length and every angle is a right angle (90°).
Square (verb): To raise a number or quantity to the second power (the number is multiplied by itself): (x)(x).
Straight angle: An angle with a measure of 180°.
Sum: The answer to an addition problem. EXAMPLE: 12 + 7 = 19; the sum is 19.
Tally chart: A table that uses tally marks to record data.
Terminating decimal: A decimal that contains a finite number of digits.
Tessellate: To combine plane figures so that they cover an area without any gaps or overlaps.
Transformation: The moving of a figure by a translation (slide), rotation (turn), or reflection (flip).
Translation (slide): A movement of a figure to a new position without turning or flipping it.
Trapezoid: A quadrilateral with exactly one pair of parallel sides.
Unit price: The price of a single item or amount (EXAMPLE: $3.50 per pound).
Unit rate: A rate with the second term being one unit (EXAMPLE: 50 mi/gal, 4.5 km/sec).
Variable: A letter or symbol that stands for a number or numbers.
Venn diagram: A diagram that shows relationships among sets of objects.
Vertex: A point where lines, rays, sides of a polygon, or edges of a polyhedron meet (a corner).
Volume (capacity): The amount of space (in cubic units) that a solid figure can hold.
Whole number: Any of the numbers 0, 1, 2, 3, 4, 5, ... (and so on).
X-axis: The horizontal number line on a coordinate plane.
Y-axis: The vertical number line on a coordinate plane.
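Several of the entries above are computations rather than shapes: mean, median, mode, and range from the data terms, and greatest common factor and least common multiple from the number terms. The short Python sketch below works each of them out for a small made-up data set, just to tie the definitions to concrete arithmetic.

```python
from statistics import mean, median, mode
from math import gcd

data = [4, 8, 6, 5, 3, 8, 7]  # made-up data set for illustration

print("mean:", mean(data))               # sum of the values divided by how many there are
print("median:", median(data))           # middle value of the ordered data
print("mode:", mode(data))               # value that occurs most often
print("range:", max(data) - min(data))   # greatest value minus least value

a, b = 12, 18
print("GCF:", gcd(a, b))                 # largest factor the two numbers share
print("LCM:", a * b // gcd(a, b))        # smallest nonzero common multiple
```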
"I continue to be amazed at the power we can harness in our secondary students by teaching ourselves and our students real numeracy." As secondary math teachers, we’re often frustrated by the lack of true number sense in our students. Solid research at the elementary level shows how to help all students become mathematically proficient by redefining what it means to compute with number sense. Pam Harris has spent the past ten years scrutinizing the research and using the resulting reform materials with teachers and students, seeing what works and what doesn’t work, always with an eye to success in higher math. This book brings these insights to the secondary world, with an emphasis on one powerful goal: building numeracy. Developing numeracy in today’s middle and high school students is reflective of the Common Core State Standards mission to build “the skills that our young people need for success in college and careers.” (CCSS 2010) Numeracy is more than the ability to do basic arithmetic. At its heart, numeracy is the ability to use mathematical relationships to reason with numbers and numerical concepts, to think through the math logically, to have a repertoire of strategies to solve problems, and to be able to apply the logic outside of classrooms. How can we build powerful numeracy in middle and secondary students? Harris’s approach emphasizes two big ideas: - Teach the importance of representation. The representation of student strategies on models such as the open number line, the open array, and the ratio table promote discussion on relationships rather than procedures - Teach with problem strings. Introduced by Catherine Twomey Fosnot and her colleagues in the Young Mathematicians at Work series, problem strings are purposefully designed sequences of related problems that help students construct numerical relationships. They encourage students to look to the numbers first before choosing a strategy, nudging them toward efficient, sophisticated strategies for computation. Understanding numerical relationships gives students the freedom to choose a strategy, rather than being stuck with only one way to solve a problem. Using the strings and activities in this book can empower your students to reason through problems and seek to find clever solutions. They’ll become more naturally inclined to use the strategies that make sense to them. Students become engaged, willing to think, and more confident in their justifications. When we give secondary students this numerical power, we also help them learn higher mathematics with more confidence and more success. 2. Addition and Subtraction: Models and Strategies 5. Multiplication and Division: Models and Strategies 8. Decimals, Fractions, and Percents: Models and Strategies 9. Decimals and Fractions: Addition and Subtraction 10. Decimals and Fractions: Multiplication and Division Label companion resources Label support materials Label product support "This book is an outstanding and welcome contribution to the field of mathematics education. It simultaneously addresses the development of numeracy and the extensions to the mathematical ideas taught on the secondary level and does so in a wonderfully engaging, coherent, and thoughtful way. Any secondary teacher reading this book will come away with a far deeper understanding of how to develop numeracy and fluent computation while integrating the more advanced topics that they are required to teach." 
—Catherine Twomey Fosnot

"Pam Harris...suggests an excellent framework not only for building numeracy but also for extending mathematical ideas, such as the four operations, first presented in elementary schools. As students apply the representation strategies outlined, they undoubtedly become more accurate, flexible, and effective mathematicians."
—Mathematics Teaching in the Middle School
Sunsets on Titan are teaching us about distant exoplanets

Sen—Saturn's smog-enshrouded moon Titan is helping scientists understand the atmospheres of exoplanets. A new technique shows the dramatic influence that hazy skies could have on our ability to learn about alien worlds orbiting distant stars. A team of researchers led by Tyler Robinson, a NASA Postdoctoral Research Fellow at NASA's Ames Research Center, has published its findings in the Proceedings of the National Academy of Sciences. "It turns out there's a lot you can learn from looking at a sunset," Robinson said.

Despite the staggering distances to other planetary systems, in recent years researchers have begun to develop techniques for collecting spectra of exoplanets. When one of these worlds transits, or passes in front of its host star as seen from Earth, some of the star's light travels through the exoplanet's atmosphere, where it is changed in subtle, but measurable, ways. This process imprints information about the planet that can be collected by telescopes. The resulting spectra enable scientists to tease out details about the temperature, composition and structure of exoplanets' atmospheres.

Robinson and his colleagues exploited a similarity between exoplanet transits and sunsets witnessed by the Cassini spacecraft at Titan. Called solar occultations, these observations allowed the scientists to observe Titan as if it were a transiting exoplanet, without having to leave the solar system.

Many worlds in our solar system, including Titan, are blanketed by clouds and high-altitude hazes. Scientists expect that many exoplanets would be similarly obscured. Clouds and hazes create a variety of complicated effects that must be disentangled from the signature of these alien atmospheres, and they present a major obstacle for understanding transit observations. Due to the complexity and computing power required to address hazes, models used to understand exoplanet spectra usually simplify their effects. "Previously, it was unclear exactly how hazes were affecting observations of transiting exoplanets," said Robinson. "So we turned to Titan, a hazy world in our own solar system that has been extensively studied by Cassini."

[Image: An artistic impression of an exoplanet in transit across the face of its star. Image credit: NASA, ESA, and G. Bacon (STScI)]

The team used four observations of Titan made between 2006 and 2011 by Cassini's visual and infrared mapping spectrometer instrument. Their results, including the complex effects due to hazes, can now be compared to exoplanet models and observations. Robinson and colleagues found that hazes high above some transiting exoplanets might severely limit what their spectra can reveal to observers. The observations might be able to glean information only from a planet's upper atmosphere. On Titan, that corresponds to about 150 to 300 km (90 to 190 miles) above the moon's surface, high above the bulk of its dense and complex atmosphere.

The study also found that Titan's hazes more strongly affect shorter wavelengths, or bluer light. Studies of exoplanet spectra have commonly assumed that hazes would affect all colours of light in similar ways. Studying sunsets through Titan's hazes has revealed that this is not the case. "People had dreamed up rules for how planets would behave when seen in transit, but Titan didn't get the memo," said Mark Marley, a co-author of the study at NASA Ames. "It looks nothing like some of the previous suggestions, and it's because of the haze."
The technique applies equally well to similar observations taken from orbit around any world, not just Titan. This means that researchers could study the atmospheres of planets like Mars and Saturn in the context of exoplanet atmospheres as well.
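A rough way to see why a high haze deck hides so much of an atmosphere is to work with the transit depth, the fraction of starlight blocked, which is approximately (R_planet / R_star)^2. The Python sketch below uses Titan's actual radius but an assumed Sun-like host star purely for illustration; it shows how making the atmosphere opaque up to 300 km raises the apparent radius and therefore the depth, while everything below that altitude stops contributing to the signal.

```python
R_STAR_KM = 696_000    # assumed Sun-like host star, for illustration only
R_TITAN_KM = 2_575     # solid-body radius of Titan

def transit_depth(opaque_altitude_km: float) -> float:
    """Fraction of starlight blocked when the atmosphere is opaque
    up to the given altitude above the surface."""
    effective_radius = R_TITAN_KM + opaque_altitude_km
    return (effective_radius / R_STAR_KM) ** 2

clear = transit_depth(0)     # no haze: the surface sets the apparent size
hazy = transit_depth(300)    # haze opaque up to roughly 300 km, as on Titan

print(f"clear-atmosphere depth: {clear:.3e}")
print(f"hazy-atmosphere depth:  {hazy:.3e}")
```

Because the haze sets a floor on the apparent radius, only gases above roughly 150 to 300 km can change the depth from one wavelength to the next, which is why the observable spectrum samples just the upper atmosphere.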
When we look at pictures of galaxies outside the Milky Way, what we usually see is mainly the light of their stars. But stars are far from the only component that makes up a galaxy. Think of the stars in this galactic soup as chunks of vegetables. The broth in which they float is the interstellar medium: not empty space, but thin and sometimes dense clouds of dust and gas flowing between the stars. Because the stars are so bright, the dust usually takes second place; but the dust from which stars are born, and to which stars return, can tell us much about the structure and activity within a galaxy.

Now, four new images have been released showing the distribution of dust in the four galaxies closest to the Milky Way: the Large and Small Magellanic Clouds, the dwarf galaxies that orbit our own; the Andromeda Galaxy, a large spiral galaxy at a distance of 2.5 million light-years; and the Triangulum Galaxy, a spiral galaxy 2.73 million light-years away.

Without dust and gas, galaxies as we know them would not exist. Stars form when a dense lump of material in a cold cloud of molecular gas collapses under gravity, adding material from the cloud around it. When a star dies, it ejects its outer matter back into the space around it, along with new, heavier elements that were fused during its lifetime. The new stars that are born incorporate the dust of the dead stars, making each successive generation of stars a little different. We are all, in fact, made of star stuff; even the stars are.

But the dust is not evenly distributed. Stellar winds, galactic winds, and the effects of gravity can all push and sculpt interstellar dust into complex shapes full of cavities. Mapping these structures and the elements within them is an important tool for understanding the formation of… well… pretty much everything.

The new images, unveiled at the 240th meeting of the American Astronomical Society, were obtained between 2009 and 2013 by the Herschel Space Observatory, operated by the European Space Agency. Until the launch of Webb – which has yet to deliver its first science images – Herschel was the largest infrared telescope ever launched. Like Webb, its ultracold operating temperature meant that Herschel could see in the far infrared, picking out some of the coldest, dustiest objects in space, down to temperatures of -270 °C (-454 °F). This includes the cold clouds in which stars are born and the dust in interstellar space. However, it was less efficient at detecting more diffuse dust and gas. To fill in the gap, a team of astronomers led by Christopher Clark of the Space Telescope Science Institute used data from three other retired telescopes: ESA's Planck and NASA's Infrared Astronomical Satellite (IRAS) and Cosmic Background Explorer (COBE).

The results reveal complex interactions within the dust. Hydrogen gas appears in red; it is the most abundant element in the universe, so there is a lot of it. Cavities in the dust where newborn stars have blown it away with their intense winds appear as empty areas, surrounded by a green glow that indicates cold dust. The blue regions represent hot dust heated by stars or other processes.

The images also reveal new information about the complex interactions taking place in interstellar dust, the researchers said. Heavy elements such as oxygen, carbon and iron can often cling to dust particles; in the densest clouds, most of these elements are bound to dust, increasing the dust-to-gas ratio.
This can affect the way light is absorbed and re-emitted by dust. However, violent processes, such as star births or supernovae, can release radiation that breaks up the dust, releasing the heavier elements back into the gaseous clouds and shifting the dust-to-gas ratio back toward gas. The Herschel images show that within a single galaxy this ratio can vary by a factor of 20, far more than astronomers had expected, and that variation is important information that could help scientists better understand the cycle. And the images are just wonderfully beautiful. Who knew Andromeda soup could be such a dazzling rainbow of color?
What is an Arc Flash?

The definition of an arc flash is "an undesired electric discharge that travels through the air between conductors or from a conductor to a ground." The arc flash is part of an arc fault, a type of electrical explosion caused by a low-impedance connection that goes through the air to the ground. When an arc flash occurs it creates a very bright light and intense heat. In addition, it has the potential to create an arc blast, which can cause a traumatic force that can severely injure anyone in the area or damage anything nearby.

What Happens During an Arc Flash?

An arc flash begins when the electricity exits its intended path and begins traveling through the air toward a grounded area. Once this happens, it ionizes the air, which further reduces the overall resistance along the path that the arc is taking. This helps draw in additional electrical energy. The arc will travel toward a ground of some type, which will typically be whatever object is closest to its source. The exact distance that an arc flash can travel is known as the arc flash boundary. This is determined by the potential energy present and a variety of other factors such as air temperature and humidity. When working to improve arc flash safety, a facility will often mark off the arc flash boundary using floor marking tape. Anyone who is working within that area will be required to wear personal protective equipment (PPE).

Potential Temperature of an Arc Flash

One of the biggest dangers associated with an arc flash is the extremely high temperature it can create. Depending on the situation, an arc flash can reach temperatures as high as 35,000 degrees Fahrenheit. This is one of the hottest temperatures found anywhere on earth and is actually about four times hotter than the surface of the sun. Even if the actual electricity doesn't touch a person, they can be severely burned if they are anywhere near it. In addition to direct burns, these temperatures can quickly start fires in the area.

How Long Does an Arc Flash Last?

An arc flash can last anywhere from a fraction of a second to several seconds, depending on a number of factors. Most arc flashes don't last very long because the source of the electricity is cut off quickly by circuit breakers or other safety equipment. If a system does not have any type of safety protection, however, the arc flash will continue until the flow of electricity is physically stopped. This may occur when an employee physically cuts the power to the area or when the damage caused by the arc flash becomes severe enough to somehow stop the flow of electricity.

Damage Potential of an Arc Flash

Due to the high temperatures, intense blasts, and other results of an arc flash, arc flashes can cause a lot of damage very quickly. Understanding the different types of damage that can occur can help facilities plan their safety efforts. The two main types of damage are damage to the facility and damage (injury) to people in the area.

Potential Property Damage:
- Heat – The heat from an arc flash can easily melt metal, which can damage expensive machines and other equipment.
- Fire – The heat from these flashes can quickly cause a fire, which can spread through a facility if not stopped.
- Blasts – The arc blast that can result from an arc flash can break windows, splinter wood in the area, bend metal, and much more. Anything stored within the arc blast radius can be damaged or destroyed in just seconds.
Potential Human Injury:
- Burns – Second and third degree burns can occur in a fraction of a second when someone is near the arc flash.
- Electrocution – If the arc flash travels through a person, he or she will be electrocuted. Depending on the amount of electricity, where it enters the body, and where it leaves, this can be fatal.
- Auditory Damage – Arc flashes can cause extremely loud noises, which can cause permanent hearing damage to those in the area.
- Eyesight Damage – Arc flashes can be very bright, which can cause temporary or even long-term damage to the eyes.
- Arc Blast Damage – An arc blast can create a force of thousands of pounds per square inch. This can throw a person several feet. It can also cause broken bones, collapsed lungs, concussions, and more.

Wearing personal protective equipment can provide a significant amount of protection, but it cannot eliminate all risk. Employees who are present when an arc flash occurs are always at risk, no matter the PPE they are wearing. This is why it is important to de-energize a machine before it is worked on whenever possible.

Potential Causes of an Arc Flash

Arc flashes can occur for a wide range of reasons. In most cases, the root cause will be a damaged piece of equipment such as a wire. It could also be a result of someone working on equipment, which makes it possible for the electricity to escape from the path it is normally confined to. Even when there is a potential path outside the wiring, the electricity is going to follow the path of least resistance. This is why an arc flash will not necessarily happen as soon as something is damaged or an alternate path is made available. Instead, the electricity will continue down the intended path until another option that has less resistance becomes available. Here are some things that can create a path with lower resistance and therefore cause an arc flash:
- Dust – In dusty areas the electricity may begin passing outside the wiring or other equipment through the dust.
- Dropped Tools – If a tool is dropped onto a wire, for example, it can damage the wire and allow the electricity to pass into the tool. From there, the electricity must find another path to continue on.
- Accidental Touching – If a person touches the damaged area, the electricity may travel through his or her body or at least out of the normal path, creating an arc flash.
- Condensation – When condensation forms, the electricity may escape the wiring through the water, and then the arc flash occurs as the electricity seeks its destination.
- Material Failure – If a wire is damaged to the point where the electricity has trouble passing through, the path outside the wire may offer less resistance than the wire itself.
- Corrosion – Corrosion can create a path outside the wire, at which point the arc flash occurs.
- Faulty Installation – When equipment is installed improperly it can make it difficult or impossible for electricity to follow the intended path, which can cause an arc flash.

Arc Flash Safety Requirements

Companies with electrical equipment need to take arc flash safety very seriously. There are many things that can be done to reduce the chances that an arc flash will occur and to keep people as safe as possible if one does happen.

Preventing Arc Flashes

The first step in arc flash safety is minimizing the risk of one occurring. This can be done by completing an electrical risk assessment, which can help identify where the biggest dangers are in a facility. IEEE 1584 is a good option for most facilities and will help identify common problems.
Routine inspections of all high voltage equipment and all wiring are another essential step. If there is any sign of corrosion, damage to wires, or other issues, they should be fixed as soon as possible. This will help keep the electrical currents safely contained within the machines and wires where they belong. Some specific areas that should be inspected include any electrical switchboards, panelboards, control panels, socket enclosures, and motor control centers.

Proper Labeling Efforts

Anywhere in a facility where high electrical currents can exist should be properly labeled with arc flash warning labels. These can be purchased pre-made or printed with any industrial label printer as they are needed. National Electrical Code article 110.16 clearly states that this type of equipment needs to be marked to warn people of the risks.

De-Energizing Equipment When Performing Maintenance

Whenever a machine needs to be worked on in any way, it should be completely de-energized. De-energizing a machine is more than just turning it off. All machines should be shut down and physically disconnected from any power source. Once disconnected, a voltage check should also be done to ensure there is no latent energy that was stored up. Ideally, a lockout/tagout policy should be in place, which will put a physical lock on the electrical supply so that it cannot be accidentally plugged back in while someone is working on it.

Arc Flash Personal Protective Equipment

It should be very rare, but there are some cases when machines must be worked on while they are still energized. When this is the case, all employees working in the area should be required to wear proper personal protective equipment. The specific PPE that is worn should correspond to the maximum potential risk based on the amount of electricity going through the machine. Having head-to-toe personal protective equipment can help to prevent serious injury or even fatalities should an arc flash occur while the machine is being worked on. Whenever possible, circuit breakers should be put in place on all machines. These breakers will quickly detect when there is a sudden surge in electricity being drawn and stop the flow immediately. Even with circuit breakers, an arc flash can occur, but it will only last a fraction of the time since the electrical current will be cut off. Even a very brief arc flash can be deadly, however, so circuit breakers should not be seen as a sufficient arc flash safety program on their own.

Arc Flash Safety Standards

All facilities must consider the various arc flash safety standards that have been put in place by government and private institutions. Determining which standards must be followed can help ensure a facility is in compliance with applicable rules and regulations, in addition to helping keep the facility safe. The following are the most common standards that cover arc flash safety:
- OSHA – OSHA has several standards, including 29 CFR parts 1910 and 1926. These standards cover requirements for electrical power generation, transmission, and distribution.
- NEC – The National Electrical Code (NEC) pertains to safe electrical installation and practices.
- NFPA – Standard NFPA 70E, the Standard for Electrical Safety in the Workplace, details a variety of requirements for warning labels, including warning labels concerning arc flashes and arc blasts. It also offers recommendations on implementing workplace best practices to help keep employees who work around high voltage equipment safe.
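The output of an incident-energy study is usually translated into the PPE that workers must wear inside the arc flash boundary. The sketch below shows only that lookup logic; the thresholds are the commonly cited NFPA 70E arc-rating levels (4, 8, 25, and 40 cal/cm2), used here as illustrative values, and any real program must take its numbers from its own engineering study and the current edition of the standard.

```python
def ppe_category(incident_energy_cal_cm2: float) -> str:
    """Map a calculated incident energy to an illustrative PPE category.

    The cutoffs below are commonly cited NFPA 70E arc-rating levels,
    included here only as an example; consult the standard and your own
    arc flash study for the values that actually apply.
    """
    if incident_energy_cal_cm2 <= 4:
        return "Category 1: arc-rated clothing rated at least 4 cal/cm2"
    if incident_energy_cal_cm2 <= 8:
        return "Category 2: arc-rated clothing rated at least 8 cal/cm2"
    if incident_energy_cal_cm2 <= 25:
        return "Category 3: arc flash suit rated at least 25 cal/cm2"
    if incident_energy_cal_cm2 <= 40:
        return "Category 4: arc flash suit rated at least 40 cal/cm2"
    return "Above 40 cal/cm2: do not perform energized work; de-energize first"

print(ppe_category(6.3))
```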
Chapter 4: One-Way ANOVA

Recall this chart that showed how most of our course would be organized:

Explanatory Variable(s)   Response Variable   Methods
Categorical               Categorical         Contingency Tables
Categorical               Quantitative        ANOVA
Quantitative              Quantitative        Regression
Quantitative              Categorical         (not discussed)

When our data consists of a quantitative response variable and one or more categorical explanatory variables, we can employ a technique called analysis of variance, abbreviated as ANOVA. The material in this chapter corresponds to the first part of Chapter 14 of the textbook. Recall that a categorical explanatory variable is also called a factor. In this chapter, we'll study the simplest form of ANOVA, one-way ANOVA, which uses one factor and one response variable. We'll also study a more complicated setup in the next chapter, two-way ANOVA, which uses two factors instead. (In principle, we could do ANOVA with any number of factors, but in practice, people usually stick to one or two.)

4.1 Basics of One-Way ANOVA

Let's start by discussing the way we organize and label the data for one-way ANOVA. We also need to formulate the basic question that we plan to ask.

Setup

Typically, when we think about one-way ANOVA, we think about the factor as dividing the subjects into groups. The goal of our analysis is then to compare the means of the subjects in each group.

Notation

Let $g$ represent the number of groups. Then we'll set things up as follows:
- Let $\mu_1, \mu_2, \dots, \mu_g$ represent the true population means of the response variable for the subjects in each group. As usual, these population parameters are what we're really interested in, but we don't know their values.
- We call each observation in the sample $Y_{ij}$, where $i$ is a number from 1 to $g$ that identifies the group number, and $j$ identifies the individual within that group. (For example, $Y_{12}$ represents the response variable value of the second individual in the first group.)
- We can calculate the sample means for each group, which we'll call $\bar{Y}_{1\cdot}, \bar{Y}_{2\cdot}, \dots, \bar{Y}_{g\cdot}$. We can use these known sample means as estimates of the corresponding unknown population means.

Example 4.1: Suppose we want to see if three McDonald's locations around town tend to put the same amount of fries in a medium order, or if some locations put more fries in the container than others. We take the next 30 days on the calendar and randomly assign 10 days to each of the three locations. On each day, we go to the specified location, order a medium order of fries, take it home, and weigh it to see how many ounces of fries it contains. The categorical explanatory variable is just which location we went to, and the quantitative response variable is the number of ounces of fries. For each of the three locations ($g = 3$), the population consists of all medium orders of fries sold at that location, while the sample consists of the orders that we actually got. The population means, which we call $\mu_1, \mu_2, \mu_3$, represent the average number of ounces of fries in all orders at each location, and these are the quantities we're interested in. We estimate them using $\bar{Y}_{1\cdot}, \bar{Y}_{2\cdot}, \bar{Y}_{3\cdot}$, the sample means for each location, which are collected from the data for our orders. The data is shown in Figure 4.1.
[Figure 4.1: Ounces of fries in 10 medium orders of fries at each of three McDonald's locations, with the mean and standard deviation for each location. The numerical values are not reproduced here.]

Question of Interest

What we really want to know is whether all of the groups have the same population mean, that is, whether $\mu_1, \mu_2, \dots, \mu_g$ are all the same. This is equivalent to asking whether or not the response variable depends on the factor. Intuitively speaking, the most obvious way to answer this question is by looking at $\bar{Y}_{1\cdot}, \bar{Y}_{2\cdot}, \dots, \bar{Y}_{g\cdot}$, the sample means of the various groups. If they are close enough to each other, in some sense, then we're willing to believe that all the true population means $\mu_1, \mu_2, \dots, \mu_g$ are the same. If one or more of $\bar{Y}_{1\cdot}, \bar{Y}_{2\cdot}, \dots, \bar{Y}_{g\cdot}$ are too far from the others, then that convinces us that the true population means must not all be the same. All that remains is to figure out what we mean by "close enough" and "too far". We'll eventually see how to do this with a hypothesis test.

One-Way ANOVA Table

ANOVA gets its name (analysis of variance) from the fact that it examines different kinds of variability in the data. It then uses this information to construct a hypothesis test. To describe these different kinds of variability, we'll first need to introduce some more notation:
- $\bar{Y}_{\cdot\cdot}$ represents the overall sample mean of all the data from all groups combined.
- $N$ is the total number of observations, and $n_i$ is the number of observations in the $i$th group. (So $n_1 + n_2 + \dots + n_g = N$.)

Sums of Squares

The most basic quantities that ANOVA uses to describe different kinds of variability are the sums of squares, abbreviated SS. One-way ANOVA involves three sums of squares:
- The total sum of squares, $SS_{Tot}$, measures the overall variability in the data by looking at how the $Y_{ij}$ values vary around $\bar{Y}_{\cdot\cdot}$, their overall mean. Its formula is $SS_{Tot} = \sum_{i=1}^{g} \sum_{j=1}^{n_i} (Y_{ij} - \bar{Y}_{\cdot\cdot})^2$. It can be seen from the formula that $SS_{Tot}/(N-1)$ is the square of the sample standard deviation we would get if we lumped all $N$ observations together, ignoring groups.
- The group sum of squares, $SS_G$, measures the variability between the groups by looking at how the sample means for each group, $\bar{Y}_{i\cdot}$, vary around $\bar{Y}_{\cdot\cdot}$, the overall mean. Its formula is $SS_G = \sum_{i=1}^{g} n_i (\bar{Y}_{i\cdot} - \bar{Y}_{\cdot\cdot})^2$.
- The error sum of squares, $SS_E$, measures the variability within the groups by looking at how each $Y_{ij}$ value varies around $\bar{Y}_{i\cdot}$, the sample mean for its group. Its formula is $SS_E = \sum_{i=1}^{g} \sum_{j=1}^{n_i} (Y_{ij} - \bar{Y}_{i\cdot})^2$. If we call the sample standard deviation within each group $s_i$, then another formula for $SS_E$ is $SS_E = \sum_{i=1}^{g} (n_i - 1) s_i^2$.

It turns out to be true that $SS_{Tot} = SS_G + SS_E$. In words, the total variability equals the sum of the variability between groups and the variability within groups.

Degrees of Freedom

The sums of squares are supposed to measure different kinds of variability in the data, but they also tend to be influenced in various ways by the number of groups $g$ and the number of observations $N$. This influence is measured by quantities called degrees of freedom that are associated with each sum of squares. Their formulas are $df_{Tot} = N - 1$, $df_G = g - 1$, and $df_E = N - g$. Notice that $df_{Tot} = df_G + df_E$. The group and error degrees of freedom add to the total, just like the sums of squares do.

Mean Squares

The mean squares are just the sums of squares divided by their degrees of freedom: $MS_G = SS_G / df_G$ and $MS_E = SS_E / df_E$. (We seldom bother calculating $MS_{Tot}$, because it's just the square of the sample standard deviation of all $N$ observations lumped together.) $MS_G$ and $MS_E$ measure the variability between groups and within groups in a way that properly accounts for $g$ and $N$, unlike $SS_G$ and $SS_E$.
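The definitions above translate directly into a short computation. The Python sketch below uses made-up measurements for three groups of ten (not the values in Figure 4.1) and computes the sums of squares, degrees of freedom, and mean squares; running it also confirms that $SS_{Tot} = SS_G + SS_E$.

```python
# Hypothetical data for g = 3 groups of n_i = 10 observations each.
groups = {
    "Location 1": [4.1, 4.6, 4.3, 4.9, 4.4, 4.2, 4.8, 4.5, 4.0, 4.7],
    "Location 2": [4.0, 4.2, 3.9, 4.4, 4.1, 4.3, 4.0, 4.2, 3.8, 4.5],
    "Location 3": [3.6, 3.9, 3.5, 3.8, 3.7, 3.4, 3.9, 3.6, 3.5, 3.8],
}

all_values = [y for ys in groups.values() for y in ys]
N, g = len(all_values), len(groups)
grand_mean = sum(all_values) / N
group_means = {k: sum(ys) / len(ys) for k, ys in groups.items()}

# Between-group, within-group, and total sums of squares.
ss_g = sum(len(ys) * (group_means[k] - grand_mean) ** 2 for k, ys in groups.items())
ss_e = sum((y - group_means[k]) ** 2 for k, ys in groups.items() for y in ys)
ss_tot = sum((y - grand_mean) ** 2 for y in all_values)

df_g, df_e, df_tot = g - 1, N - g, N - 1
ms_g, ms_e = ss_g / df_g, ss_e / df_e

print(f"SS_G = {ss_g:.3f}, SS_E = {ss_e:.3f}, SS_Tot = {ss_tot:.3f}")  # SS_G + SS_E = SS_Tot
print(f"df_G = {df_g}, df_E = {df_e}, df_Tot = {df_tot}")
print(f"MS_G = {ms_g:.3f}, MS_E = {ms_e:.3f}")
```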
Table

We typically summarize all this information in an ANOVA table. An ANOVA table for one-way ANOVA is laid out as shown in Figure 4.2. (A few other quantities that we'll calculate later are also sometimes included as extra columns on the right side of the ANOVA table.)

Figure 4.2: Generic one-way ANOVA table.
Source   df        SS        MS
Group    df_G      SS_G      MS_G
Error    df_E      SS_E      MS_E
Total    df_Tot    SS_Tot

Example 4.2: The ANOVA table for the data shown in Figure 4.1 would obviously be very tedious to calculate by hand, so we use computer software to calculate the ANOVA table shown in Figure 4.3.

[Figure 4.3: ANOVA table for the data in Figure 4.1. The numerical values are not reproduced here.]

4.2 One-Way ANOVA F Test

The focus of ANOVA is a hypothesis test for checking whether all the groups have the same population mean. This is the same as testing whether the response variable depends on the factor. Sometimes we'll refer to this as a test for whether the factor has an effect on the response variable (although it may not be right to think about this as a literal cause-and-effect relationship).

One-Way ANOVA F Test Procedure

Like any other hypothesis test, the one-way ANOVA F test consists of the standard five steps.

Assumptions

The one-way ANOVA F test makes four assumptions:
- The data comes from a random sample or randomized experiment. In an observational study, the subjects in each group should be a random sample from that group. In an experiment, the subjects should be randomly assigned to the groups.
- The data for each group should be independent. For example, we wouldn't want to reuse the same subject for measurements in more than one group.
- For each group, the population distribution of the response variable has a normal distribution. To check this assumption, there are a couple of things we should look for: the shape of the data should look at least sort of close to normal, and there should be no outliers.
- The population distribution of the response variable has the same standard deviation σ for each group. Of course, we don't know σ, but we can still check this assumption by comparing the sample standard deviations for each group. As an approximate rule of thumb, we typically don't worry unless one group's standard deviation is more than twice as big as another's.

Note: The textbook organizes these four assumptions a little differently. It combines my first two and my last two, and so it lists only two assumptions.

Hypotheses

The null hypothesis for the one-way ANOVA F test is that the factor has no effect, and the alternative is that it does. In terms of parameters, we can write these hypotheses as follows:
$H_0$: $\mu_1, \mu_2, \dots, \mu_g$ are all equal.
$H_a$: $\mu_1, \mu_2, \dots, \mu_g$ are not all equal.

Test Statistic

If we're testing whether or not $\mu_1, \mu_2, \dots, \mu_g$ are all equal, then it seems reasonable to look at our estimates of those quantities and see if those are all close enough to each other. So we want to look at whether $\bar{Y}_{1\cdot}, \bar{Y}_{2\cdot}, \dots, \bar{Y}_{g\cdot}$ are all close enough to each other. We measure the closeness of the group means using $MS_G$, the variability between groups.
But there's something else we need to consider as well. Look at the data in Figure 4.4, which shows some hypothetical data comparing gas prices from three different states.

[Figure 4.4: Two hypothetical data sets for a study of gas prices in three states (GA, AL, FL). Both data sets have the same sample mean for each state ($2.05, $2.15, $2.25), but the data set on the left has much less variability within each state.]

Notice that the sample mean for each group (state) is the same for both data sets, so the variability between groups, $MS_G$, is the same as well. However, common sense says that the data set on the left is much more convincing that there is an actual difference from group to group. Mathematically, this is because the data set on the left has less variability within groups, which we measure with $MS_E$. Our test statistic compares the variability between groups to the variability within groups by taking a ratio:
$F = MS_G / MS_E$.
When $MS_G$ is large compared to $MS_E$, like the hypothetical data set on the left, $F$ will be large. So larger $F$ values represent more evidence that there is a difference between the group population means; in other words, more evidence against $H_0$ and in favor of $H_a$.

P-Value and the F Distribution

Recall the definition of the p-value: the p-value is the probability of getting a test statistic value at least as extreme as the one observed, if $H_0$ is true. Typically the p-value is a tail probability from whatever kind of statistical distribution the test statistic has when $H_0$ is true. For the one-way ANOVA F test statistic, we call this distribution an F distribution, like the ones shown in Figure 4.5.

[Figure 4.5: Density of the F distribution for $df_1 = 2$, $df_2 = 27$ (left) and $df_1 = 3$, $df_2 = 40$ (right).]

An F distribution has the following properties:
- It is skewed right.
- Things with an F distribution can't be negative, so the F distribution has only one tail. (We never need to double any tail probabilities from an F distribution.)
- The center of the F distribution is usually somewhere around 1, or a little less.
- The exact shape of the F distribution is determined by two different degrees of freedom: the numerator degrees of freedom, or $df_1$, and the denominator degrees of freedom, or $df_2$.

If $H_0$ is true, our test statistic, $F$, has an F distribution with $df_1 = df_G$ and $df_2 = df_E$. This is easy to remember, since the formula for $F$ is $F = MS_G / MS_E$, and the numerator and denominator degrees of freedom are just the degrees of freedom associated with the quantities in the numerator and denominator of $F$. Remember that we said the larger values of $F$ are the values that are more supportive of $H_a$. So the p-value is the probability of getting an $F$ value larger than the one we actually got, if $H_0$ is true. Since the test statistic $F$ has an F distribution if $H_0$ is true, this probability is represented by the shaded area in Figure 4.6. To calculate this probability exactly, we typically need statistical software.

[Figure 4.6: Tail probability of an F distribution with $df_1 = 3$ (shaded area above the observed F value).]

If we don't have access to statistical software, we often have to use an F table like the one in the back of our textbook to try to figure out the p-value. Ideally, we would go to our F table, find the correct $df_1$ and $df_2$, look up our $F$ value, and it would tell us the p-value.
Unfortunately, that's way too much information and would require our F table to be dozens of pages long. Instead, a typical F table, like the one in Figure 4.7, works a little differently. For each combination of $df_1$ and $df_2$, the table tells us only a single number. That number is the $F$ value corresponding to a p-value of 0.05. We then check whether our observed $F$ test statistic value is larger or smaller than the one listed in the table.
- If our test statistic value is larger than the number in the table, then our p-value is smaller than 0.05.
- If our test statistic value is smaller than the number in the table, then our p-value is larger than 0.05.
We can see that the p-value behaves as it should: smaller p-values correspond to larger $F$ values, and both correspond to more evidence against $H_0$ and in support of $H_a$.

[Figure 4.7: Top-left corner of an F table for right-tail probabilities of 0.05.]

Decision

We make a decision the same way we always do for any hypothesis test: by rejecting $H_0$ if the p-value is less than or equal to α (often 0.05), and failing to reject $H_0$ if the p-value is greater than α. Remember that the hypotheses we're testing are
$H_0$: $\mu_1, \mu_2, \dots, \mu_g$ are all equal.
$H_a$: $\mu_1, \mu_2, \dots, \mu_g$ are not all equal.
So let's think about what our decision really represents.
- If we reject $H_0$, then we're concluding that at least some of the group population means are different.
- If we fail to reject $H_0$, then we're concluding that it's reasonable that all the group population means are the same.

Example 4.3: Let's go through the five steps of the one-way ANOVA F test for the data in Example 4.2 using α = 0.05.
1. Let's check each of the four assumptions.
- Each day was randomly assigned to a particular location, so this is a randomized experiment.
- The different groups correspond to different locations, each of which should have no ability to affect the measurements of the other two, so the groups are independent.
- It's very hard to tell much about the shape of the data with only 10 observations in each group, but quick dotplots for each group show shapes that are at least somewhat consistent with a normal distribution. Also, we see no outliers in any of the groups.
- The sample standard deviations for the three groups are at least sort of close to each other, so we don't see any violation of the constant standard deviation assumption.
Our assumptions are okay, so we can proceed.
2. The null hypothesis is that $\mu_1 = \mu_2 = \mu_3$, which means that the three locations, on average, give out the same amount of fries. The alternative hypothesis is that at least one of $\mu_1, \mu_2, \mu_3$ is not equal to the others, which means that at least one of the locations gives out more or fewer fries than the others.
3. The test statistic $F = MS_G / MS_E$ is calculated from the mean squares in the ANOVA table shown in Figure 4.3, which gives $F = 3.55$.
4. To calculate our p-value, we compare our observed test statistic value to an F distribution with $df_1 = df_G = 2$ and $df_2 = df_E = 27$. When we consult the F table for these df values, the number that it gives us is 3.35. This means that for an F distribution with these degrees of freedom, a test statistic value of 3.35 would correspond to a p-value of 0.05. Our observed test statistic value of 3.55 is larger than 3.35, so our p-value is smaller than 0.05. (We could use statistical software to calculate the exact p-value.)
5. Our p-value is smaller than our α, so we reject $H_0$. We can conclude that the three locations do not give out the same amount of fries. However, we can't conclude anything about which locations give out more or fewer fries than the others, or about how many more or fewer they give out.
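With software the entire test is a single call. The sketch below reuses the made-up three-group data from the earlier snippet (not the textbook's fries data), computes the F statistic and p-value with scipy.stats.f_oneway, and then checks the p-value directly against the F distribution with $df_1 = g - 1$ and $df_2 = N - g$.

```python
from scipy import stats

loc1 = [4.1, 4.6, 4.3, 4.9, 4.4, 4.2, 4.8, 4.5, 4.0, 4.7]
loc2 = [4.0, 4.2, 3.9, 4.4, 4.1, 4.3, 4.0, 4.2, 3.8, 4.5]
loc3 = [3.6, 3.9, 3.5, 3.8, 3.7, 3.4, 3.9, 3.6, 3.5, 3.8]

# One-way ANOVA F test: H0 says all group population means are equal.
f_stat, p_value = stats.f_oneway(loc1, loc2, loc3)
print(f"F = {f_stat:.2f}, p-value = {p_value:.4f}")

# The p-value is the right-tail area of the F distribution beyond the observed F.
g, N = 3, 30
print(f"tail area = {stats.f.sf(f_stat, g - 1, N - g):.4f}")  # matches p_value
```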
Figure 4.8 may be helpful for remembering various results and interpretations of a one-way ANOVA F test.

Figure 4.8: Results and interpretations of a one-way ANOVA F test.
Large F value (much larger than 1)  |  Small F value (around 1 or less than 1)
Small p-value                        |  Large p-value
Evidence against H0 (for Ha)         |  No evidence against H0 (for Ha)
Reject H0                            |  Fail to reject H0
Conclude that some population group means differ  |  Reasonable that all population group means are the same

Alternatives to the One-Way ANOVA F Test

There are some situations in which one-way ANOVA could be used, but another test procedure might be equivalent or preferable.

One-Way ANOVA with Two Groups

When we have only two groups, then the one-way ANOVA F test serves exactly the same purpose as the two-sided two-sample t test from Section 10.2, which you saw in your previous course. It turns out that one-way ANOVA with only two groups is completely equivalent to the two-sided two-sample t test, in the sense that both tests will give exactly the same p-value. (This happens because their test statistics are related: $F = t^2$.) So in this case, it makes no difference which procedure is used, since both will yield exactly the same conclusion. However, the two-sample t test is slightly more flexible in this case since it also allows us to use a one-sided alternative hypothesis if we so desire.

Ordinal Variables

If the factor is an ordinal variable, one-way ANOVA makes no use of the ordering information. There exist other test procedures that might make slightly fewer type II errors than one-way ANOVA by taking into account the order of the factor categories, but we won't discuss these procedures here.

Normality

One-way ANOVA assumes that the data in each group comes from a normal distribution. Even if the distribution is somewhat different from normal, one-way ANOVA can still work okay if the sample sizes are large enough. However, when sample sizes are small, one-way ANOVA can be unreliable if the data in one or more of the groups comes from a highly non-normal distribution. There exists a nonparametric equivalent of the one-way ANOVA F test called the Kruskal-Wallis test that uses only the ranks of the data and is okay to use no matter what distribution the data comes from. We won't discuss the details, but Section 15.2 of the textbook gives a brief outline.

Block Designs

Recall from Stats 1 that when we wanted to compare the means of two groups, there were two different procedures:
- The two-sample t test compared groups when the data in one group was independent from the data in the other group.
- The matched-pairs t test compared groups when each observation in one group was paired with a corresponding observation in the other group (such as husbands and wives, or before and after measurements).
The one-way ANOVA F test we discussed in this section is the multiple-group analog of the two-sample t test. (That's why they're equivalent when there are only two groups.) As mentioned in the assumptions, it can't be used when the observations in a group correspond to observations in other groups. There also exists a procedure called a block design that is the multiple-group analog of the matched-pairs t test.
It should be used instead of simple one-way ANOVA when each subject is reused for measurements in each group. There are many cases where such a procedure is useful.

Example 4.4: Suppose we want to compare the effectiveness of three kinds of fertilizer for growing corn. We have five plots of land available to use, so we divide each plot into thirds and use one fertilizer on each third. Here the plots of land are the subjects and the fertilizers are the groups. Each subject is being reused for each group, so we can't use the one-way ANOVA procedure we discussed in this section. However, this type of data can be analyzed using a block design.

Unfortunately, we won't have time to discuss block designs in detail in this course. The textbook doesn't discuss them either, so if for some reason you need to learn about them, consult another textbook instead. (I can give you a reference if you're interested.)

4.3 One-Way ANOVA Confidence Intervals

The one-way ANOVA F test allows us to conclude whether or not the population group means are all equal. However, we might also want to say something about what we think the group means actually are, or about which group means are different and by how much. We can answer these questions by constructing confidence intervals. Since there are multiple quantities for which we might want to construct confidence intervals in a one-way ANOVA setup, we need to discuss the right way to do this.

Simultaneous Confidence Intervals

When we construct more than one confidence interval at a time, we have to be careful to maintain our specified overall confidence level. For example, if we're 95% confident in the statement "$\mu_1$ is between 78 and 86", and we're also 95% confident in the statement "$\mu_2$ is between 31 and 39", then we'll (usually) be less than 95% confident in the combined statement "$\mu_1$ is between 78 and 86 and $\mu_2$ is between 31 and 39". When we want to state a certain overall confidence level for several confidence intervals simultaneously, we need to construct simultaneous confidence intervals. (If we're only interested in setting the confidence level for one confidence interval at a time, then we might call this an individual confidence level, to distinguish it from an overall simultaneous confidence level.)

Multiple Comparison Methods

To construct simultaneous confidence intervals, we have to use something called a multiple comparison method. There are a variety of multiple comparison methods, and the best one to use depends on what kind of confidence intervals we plan to construct. We won't discuss the details here.

Confidence Intervals for Group Means

The most obvious quantities for which we might want to construct confidence intervals are $\mu_1, \dots, \mu_g$, the population means of the groups. Since we're constructing multiple confidence intervals at once, we'll need to use a multiple comparison procedure. Many different multiple comparison methods exist for this situation, and one of the most commonly used is the Bonferroni method. We'll refer to the intervals it produces as Bonferroni simultaneous confidence intervals.

Assumptions

The assumptions for constructing confidence intervals for group means are the same as those for the one-way ANOVA F test.

Estimating the Standard Deviation

Recall that one of our assumptions is that each group has the same population standard deviation, which we call σ. We can estimate σ using $\hat{\sigma} = \sqrt{MS_E}$.
This quantity will show up in the confidence interval formula, but it might also be useful in its own right.

Example 4.5: In Example 4.2, we calculated MSE. Hence our estimate for the population standard deviation σ of each group is $\hat{\sigma} = \sqrt{MSE}$.

Formula

To construct a set of Bonferroni simultaneous confidence intervals for µ1, ..., µg, we can use the following formula for each µi:

CI for µi: $\bar{Y}_i \pm t \, \hat{\sigma} \sqrt{1/n_i}$

where t is a number that depends on the confidence level, N, and g. We won't discuss the details of how to get t in this chapter, but we may come back to it later (the Bonferroni method will come up again in a later chapter).

Example 4.6: For Example 4.2, simultaneous 95% Bonferroni confidence intervals for the three group means, as calculated by statistical software, are as follows:

µ1: (3.85, 4.87)
µ2: (3.58, 4.60)
µ3: (3.10, 4.12)

Since this is a set of simultaneous confidence intervals, we can say that we're 95% confident that all three parameter values are in their corresponding intervals.

Confidence Intervals for Differences of Group Means

The one-way ANOVA F test only tells us whether there are differences between the groups. It does not give a verdict on which groups are different, or by how much. To figure this out, we can construct confidence intervals to compare each pair of group population means. More specifically, we want to construct simultaneous confidence intervals for µi − µk for each pair of groups i and k. For example, with three groups, there would be three quantities for which we would want to construct confidence intervals: µ1 − µ2, µ1 − µ3, and µ2 − µ3. Many different multiple comparison methods exist for this situation, but the best one for our purposes is called the Tukey method. We'll refer to the intervals it produces as Tukey simultaneous confidence intervals.

Assumptions

The assumptions for constructing Tukey simultaneous confidence intervals are exactly the same as those for the one-way ANOVA F test, with one additional requirement: the group sample sizes n1, n2, ..., ng should be at least approximately equal.

Formula

To construct a set of Tukey simultaneous confidence intervals, we can use the following formula for each pair of groups i and k:

CI for µi − µk: $(\bar{Y}_i - \bar{Y}_k) \pm q \, \hat{\sigma} \sqrt{1/n_i + 1/n_k}$

where q is a number that depends on the confidence level, N, and g. We won't discuss the details of how to get q, since we would typically use statistical software to calculate it for us.

Interpretation

For each comparison of two groups, we interpret the corresponding Tukey simultaneous confidence interval as follows:

- If the interval contains only positive numbers, then we can conclude that the first of the two population means being compared is bigger than the second.
- If the interval contains only negative numbers, then we can conclude that the first of the two population means being compared is smaller than the second.
- If the interval contains both positive and negative numbers (in other words, if it contains zero), then we can't conclude that either of the two population means being compared is bigger than the other.

Of course, whenever we conclude that one population mean is bigger than another, the interval also gives us an idea of how much bigger.
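In practice these intervals come from statistical software. Here is a rough sketch of how that might look, using statsmodels' pairwise_tukeyhsd; the weights and location labels are invented stand-ins, not the Example 4.2 data:

```python
# A rough sketch using statsmodels' pairwise_tukeyhsd; the weights and
# location labels are invented stand-ins, not the Example 4.2 data.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

weights = np.array([4.4, 4.6, 4.1, 4.0, 3.9, 4.3, 3.7, 3.5, 3.8])
locations = np.array(["Loc1"] * 3 + ["Loc2"] * 3 + ["Loc3"] * 3)

# 95% family-wise (simultaneous) confidence level by default.
result = pairwise_tukeyhsd(endog=weights, groups=locations, alpha=0.05)
print(result)          # mean differences, adjusted p-values, and interval bounds
print(result.confint)  # lower/upper limits for each pairwise difference
```

The printed table lists one row per pair of groups, which is exactly the kind of output interpreted in the example below.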
Example 4.7: For Example 4.2, Tukey simultaneous 95% confidence intervals, as calculated by statistical software, are as follows:

µ1 − µ2: (−0.44, 0.98)
µ1 − µ3: (0.04, 1.46)
µ2 − µ3: (−0.23, 1.19)

So we can't conclude that there's any difference between µ1 and µ2 or between µ2 and µ3, since both of the corresponding intervals contain both positive and negative numbers. However, we can conclude that µ1 is bigger than µ3, since the corresponding interval contains only positive numbers. In other words, we can conclude that Location 1 gives out more fries than Location 3, but we can't conclude anything about how Location 2 compares to either of them.
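For completeness, here is a minimal sketch of how the Bonferroni intervals from earlier in this section could be computed directly from the formula $\bar{Y}_i \pm t \, \hat{\sigma} \sqrt{1/n_i}$. The group data are invented, and the particular choice of t shown (a t critical value at level 1 − α/(2g) on N − g degrees of freedom) is the usual Bonferroni adjustment, stated here as an assumption since the notes defer the details of choosing t to a later chapter:

```python
# Minimal sketch of Bonferroni simultaneous CIs:  Ybar_i ± t * sigma_hat * sqrt(1/n_i).
# The group data are invented; the choice of t is the standard Bonferroni adjustment.
import numpy as np
from scipy import stats

groups = [np.array([4.4, 4.6, 4.1]),
          np.array([4.0, 3.9, 4.3]),
          np.array([3.7, 3.5, 3.8])]

g = len(groups)                                   # number of groups
N = sum(len(x) for x in groups)                   # total sample size
sse = sum(((x - x.mean()) ** 2).sum() for x in groups)
mse = sse / (N - g)                               # mean squared error, df = N - g
sigma_hat = np.sqrt(mse)                          # estimate of the common sigma

alpha = 0.05                                      # 95% overall confidence
t_crit = stats.t.ppf(1 - alpha / (2 * g), df=N - g)

for i, x in enumerate(groups, start=1):
    half_width = t_crit * sigma_hat * np.sqrt(1 / len(x))
    print(f"mu_{i}: ({x.mean() - half_width:.2f}, {x.mean() + half_width:.2f})")
```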
Greenland ice sheet

Area: 1,710,000 km2 (660,000 sq mi); Length: 2,400 km (1,500 mi); Width: 1,100 km (680 mi); Thickness: 2,000–3,000 m (6,600–9,800 ft)

It is the second largest ice body in the world, after the Antarctic Ice Sheet. The ice sheet is almost 2,400 kilometres (1,500 mi) long in a north-south direction, and its greatest width is 1,100 kilometres (680 mi) at a latitude of 77°N, near its northern margin. The mean altitude of the ice is 2,135 metres (7,005 ft). The thickness is generally more than 2 km (1.2 mi) and over 3 km (1.9 mi) at its thickest point. It is not the only ice mass of Greenland – isolated glaciers and small ice caps cover between 76,000 and 100,000 square kilometres (29,000 and 39,000 sq mi) around the periphery. If the entire 2,850,000 cubic kilometres (684,000 cu mi) of ice were to melt, it would lead to a global sea level rise of 7.2 m (24 ft). The Greenland Ice Sheet is sometimes referred to under the term inland ice, or its Danish equivalent, indlandsis. It is also sometimes referred to as an ice cap.

The ice in the current ice sheet is as old as 110,000 years. The presence of ice-rafted sediments in deep-sea cores recovered off northeast Greenland, in the Fram Strait, and south of Greenland indicated the more or less continuous presence of either an ice sheet or ice sheets covering significant parts of Greenland for the last 18 million years. From about 11 million years ago to 10 million years ago, the Greenland Ice Sheet was greatly reduced in size. The Greenland Ice Sheet formed in the middle Miocene by coalescence of ice caps and glaciers. There was an intensification of glaciation during the Late Pliocene.

The weight of the ice has depressed the central area of Greenland; the bedrock surface is near sea level over most of the interior of Greenland, but mountains occur around the periphery, confining the sheet along its margins. If the ice disappeared, Greenland would most probably appear as an archipelago, at least until isostasy lifted the land surface above sea level once again. The ice surface reaches its greatest altitude on two north-south elongated domes, or ridges. The southern dome reaches almost 3,000 metres (10,000 ft) at latitudes 63°–65°N; the northern dome reaches about 3,290 metres (10,800 ft) at about latitude 72°N. The crests of both domes are displaced east of the centre line of Greenland. The unconfined ice sheet does not reach the sea along a broad front anywhere in Greenland, so that no large ice shelves occur. The ice margin just reaches the sea, however, in a region of irregular topography in the area of Melville Bay southeast of Thule. Large outlet glaciers, which are restricted tongues of the ice sheet, move through bordering valleys around the periphery of Greenland to calve off into the ocean, producing the numerous icebergs that sometimes occur in North Atlantic shipping lanes. The best known of these outlet glaciers is Jakobshavn Glacier (Greenlandic: Sermeq Kujalleq), which, at its terminus, flows at speeds of 20 to 22 metres or 66 to 72 feet per day.

On the ice sheet, temperatures are generally substantially lower than elsewhere in Greenland. The lowest mean annual temperatures, about −31 °C (−24 °F), occur on the north-central part of the north dome, and temperatures at the crest of the south dome are about −20 °C (−4 °F). During winter, the ice sheet takes on a clear blue/green color.
During summer, the top layer of ice melts, leaving pockets of air in the ice that make it look white.

The ice sheet as a record of past climates

The ice sheet, consisting of layers of compressed snow from more than 100,000 years of snowfall, contains in its ice today's most valuable record of past climates. In the past decades, scientists have drilled ice cores up to 4 kilometres (2.5 mi) deep. Using those ice cores, scientists have obtained information on (proxies for) temperature, ocean volume, precipitation, chemistry and gas composition of the lower atmosphere, volcanic eruptions, solar variability, sea-surface productivity, desert extent and forest fires. This variety of climatic proxies is greater than in any other natural recorder of climate, such as tree rings or sediment layers.

The melting ice sheet

Many scientists who study the ice melt in Greenland consider that a temperature rise of two or three degrees Celsius would result in a complete melting of Greenland's ice. Positioned in the Arctic, the Greenland ice sheet is especially vulnerable to climate change. The Arctic climate is now believed to be rapidly warming, and much larger Arctic shrinkage changes are projected. The Greenland Ice Sheet has experienced record melting in recent years since detailed record-keeping began, and is likely to contribute substantially to sea level rise, as well as to possible changes in ocean circulation in the future, if this melting proves to be sustained. The area of the sheet that experiences melting has been argued to have increased by about 16% between 1979 (when measurements started) and 2002 (most recent data). The area of melting in 2002 broke all previous records. The number of glacial earthquakes at the Helheim Glacier and the northwest Greenland glaciers increased substantially between 1993 and 2005.

In 2006, estimated monthly changes in the mass of Greenland's ice sheet suggested that it was melting at a rate of about 239 cubic kilometers (57 cu mi) per year. A more recent study, based on reprocessed and improved data between 2003 and 2008, reports an average trend of 195 cubic kilometers (47 cu mi) per year. These measurements came from the US space agency's GRACE (Gravity Recovery and Climate Experiment) satellite, launched in 2002, as reported by the BBC. Using data from two ground-observing satellites, ICESAT and ASTER, a study published in Geophysical Research Letters (September 2008) shows that nearly 75 percent of the loss of Greenland's ice can be traced back to small coastal glaciers. If the entire 2,850,000 km3 (684,000 cu mi) of ice were to melt, global sea levels would rise 7.2 m (24 ft).

Recently, fears have grown that continued climate change will make the Greenland Ice Sheet cross a threshold where long-term melting of the ice sheet is inevitable. Climate models project that local warming in Greenland will be 3 °C (5 °F) to 9 °C (16 °F) during this century. Ice sheet models project that such a warming would initiate the long-term melting of the ice sheet, leading to a complete melting of the ice sheet (over centuries), resulting in a global sea level rise of about 7 metres (23 ft). Such a rise would inundate almost every major coastal city in the world. How fast the melt would eventually occur is a matter of discussion. According to the IPCC 2001 report, such warming, if kept from rising further after the 21st century, would result in a sea level rise of 1 to 5 metres over the next millennium due to Greenland ice sheet melting.
Some scientists have cautioned that these rates of melting are overly optimistic, as they assume a linear, rather than erratic, progression. James E. Hansen has argued that multiple positive feedbacks could lead to nonlinear ice sheet disintegration much faster than claimed by the IPCC. According to a 2007 paper, "we find no evidence of millennial lags between forcing and ice sheet response in paleoclimate data. An ice sheet response time of centuries seems probable, and we cannot rule out large changes on decadal time-scales once wide-scale surface melt is underway."

In a 2013 study published in Nature, 133 researchers analyzed a Greenland ice core from the Eemian interglacial. They concluded that the ice sheet had been 8 degrees Celsius warmer than today, resulting in a thickness decrease of the northwest Greenland ice sheet of 400 ± 250 metres, with surface elevations 122,000 years ago of 130 ± 300 metres lower than at present.

The melt zone, where summer warmth turns snow and ice into slush and melt ponds of meltwater, has been expanding at an accelerating rate in recent years. When the meltwater seeps down through cracks in the sheet, it accelerates the melting and, in some areas, allows the ice to slide more easily over the bedrock below, speeding its movement to the sea. Besides contributing to global sea level rise, the process adds freshwater to the ocean, which may disturb ocean circulation and thus regional climate. In July 2012, this melt zone extended to 97 percent of the ice cover. Ice cores show that events such as this occur approximately every 150 years on average. The last time a melt this large happened was in 1889. This particular melt may be part of cyclical behavior; however, Lora Koenig, a Goddard glaciologist, suggested that "...if we continue to observe melting events like this in upcoming years, it will be worrisome."

Meltwater around Greenland may transport nutrients in both dissolved and particulate phases to the ocean. Measurements of the amount of iron in meltwater from the Greenland ice sheet show that extensive melting of the ice sheet might add an amount of this micronutrient to the Atlantic Ocean equivalent to that added by airborne dust. However, much of the particulate material and iron derived from glaciers around Greenland may be trapped within the extensive fjords that surround the island, and, unlike the HNLC (high-nutrient, low-chlorophyll) Southern Ocean, where iron is an important limiting micronutrient, biological production in the North Atlantic is subject only to very spatially and temporally limited periods of iron limitation. Nonetheless, high productivity is observed in the immediate vicinity of major marine-terminating glaciers around Greenland, and this is attributed to meltwater inputs driving the upwelling of seawater rich in macronutrients.

Researchers have also examined how clouds enhance Greenland ice sheet melt. A study published in Nature in 2013 found that optically thin liquid-bearing clouds extended the July 2012 extreme melt zone, while a Nature Communications study in 2016 suggests that clouds in general enhance the Greenland ice sheet's meltwater runoff by more than 30% due to decreased meltwater refreezing in the firn layer at night.

A 2015 study by climate scientists Michael Mann of Penn State and Stefan Rahmstorf from the Potsdam Institute for Climate Impact Research suggests that the cold blob observed in the North Atlantic during otherwise record-warm years is a sign that the Atlantic Ocean's meridional overturning circulation (AMOC) may be weakening.
They published their findings, concluding that the AMOC shows an exceptional slowdown over the last century, and that Greenland melt is a possible contributor. A study published in 2016 by researchers from the University of South Florida, Canada and the Netherlands used GRACE satellite data to estimate freshwater flux from Greenland. They concluded that freshwater runoff is accelerating, and could eventually cause a disruption of the AMOC in the future, which would affect Europe and North America.

Recent ice loss events

- Between 2000 and 2001: Northern Greenland's Petermann glacier lost 85 square kilometres (33 sq mi) of floating ice.
- Between 2001 and 2005: Sermeq Kujalleq broke up, losing 93 square kilometres (36 sq mi) of ice, and raised awareness worldwide of glacial response to global climate change.
- July 2008: Researchers monitoring daily satellite images discovered that a 28-square-kilometre (11 sq mi) piece of Petermann broke away.
- August 2010: A sheet of ice measuring 260 square kilometres (100 sq mi) broke off from the Petermann Glacier. Researchers from the Canadian Ice Service located the calving from NASA satellite images taken on August 5. The images showed that Petermann lost about one-quarter of its 70 km-long (43 mile) floating ice shelf.
- July 2012: Another large section of ice, about twice the area of Manhattan at roughly 120 square kilometres (46 sq mi), broke away from the Petermann glacier in northern Greenland.
- In 2015, Jakobshavn Glacier calved an iceberg about 4,600 feet thick and about five square miles in area.

Satellite measurements of Greenland's ice cover from 1979 to 2009 reveal a trend of increased melting. NASA's MODIS and QuikSCAT satellite data from 2007 were compared to confirm the precision of different melt observations.

Ice sheet acceleration

Two mechanisms have been proposed to explain the change in velocity of the Greenland Ice Sheet's outlet glaciers. The first is the enhanced meltwater effect, which relies on additional surface melting, funneled through moulins reaching the glacier base and reducing the friction through a higher basal water pressure. (It should be noted that not all meltwater is retained in the ice sheet and some moulins drain into the ocean, with varying rapidity.) This idea was observed to be the cause of a brief seasonal acceleration of up to 20% on Sermeq Kujalleq in 1998 and 1999 at Swiss Camp. (The acceleration lasted between two and three months and was less than 10% in 1996 and 1997, for example.) The researchers concluded that the "coupling between surface melting and ice-sheet flow provides a mechanism for rapid, large-scale, dynamic responses of ice sheets to climate warming". Examination of recent rapid supra-glacial lake drainage documented short-term velocity changes due to such events, but these had little significance for the annual flow of the large outlet glaciers.

The second mechanism is a force imbalance at the calving front due to thinning, causing a substantial non-linear response. In this case an imbalance of forces at the calving front propagates up-glacier. Thinning causes the glacier to be more buoyant, reducing frictional back forces, as the glacier becomes more afloat at the calving front. The reduced friction due to greater buoyancy allows for an increase in velocity.
This is akin to letting off the emergency brake a bit. The reduced resistive force at the calving front is then propagated up-glacier via longitudinal extension because of the backforce reduction. For ice-streaming sections of large outlet glaciers (in Antarctica as well) there is always water at the base of the glacier that helps lubricate the flow.

If the enhanced meltwater effect is the key, then, since meltwater is a seasonal input, velocity would have a seasonal signal and all glaciers would experience this effect. If the force imbalance effect is the key, then the velocity will propagate up-glacier, there will be no seasonal cycle, and the acceleration will be focused on calving glaciers.

Helheim Glacier, in East Greenland, had a stable terminus from the 1970s to 2000. In 2001–2005 the glacier retreated 7 km (4.3 mi) and accelerated from 20 to 33 m (70 to 110 ft) per day, while thinning up to 130 meters (430 ft) in the terminus region. Kangerdlugssuaq Glacier, in East Greenland, had a stable terminus history from 1960 to 2002. The glacier velocity was 13 m (43 ft) per day in the 1990s. In 2004–2005 it accelerated to 36 m (120 ft) per day and thinned by up to 100 m (300 ft) in the lower reach of the glacier. On Sermeq Kujalleq the acceleration began at the calving front and spread up-glacier 20 km (12 mi) in 1997 and up to 55 km (34 mi) inland by 2003. On Helheim the thinning and velocity propagated up-glacier from the calving front. In each case the major outlet glaciers accelerated by at least 50%, a much larger change than that attributed to increased summer meltwater. On each glacier the acceleration was not restricted to the summer, persisting through the winter when surface meltwater is absent.

An examination of 32 outlet glaciers in southeast Greenland indicates that the acceleration is significant only for marine-terminating outlet glaciers—glaciers that calve into the ocean. A 2008 study noted that the thinning of the ice sheet is most pronounced for marine-terminating outlet glaciers. Taken together, these studies concluded that the only plausible sequence of events is that increased thinning of the terminus regions of marine-terminating outlet glaciers ungrounded the glacier tongues and subsequently allowed acceleration, retreat and further thinning.

Warmer temperatures in the region have brought increased precipitation to Greenland, and part of the lost mass has been offset by increased snowfall. However, there are only a small number of weather stations on the island, and though satellite data can examine the entire island, it has only been available since the early 1990s, making the study of trends difficult. It has been observed that there is more precipitation where it is warmer, up to 1.5 meters per year on the southeast flank, and less precipitation or none on the 25–80 percent (depending on the time of year) of the island that is cooler.

Rate of change

Several factors determine the net rate of growth or decline. These are:

- Accumulation and melting rates of snow in the central parts
- Melting of surface snow and ice which then flows into moulins, falls and flows to bedrock, lubricates the base of glaciers, and affects the speed of glacial motion. This flow is implicated in accelerating the speed of glaciers and thus the rate of glacial calving.
- Melting of ice along the sheet's margins (runoff) and basal hydrology
- Iceberg calving into the sea from outlet glaciers, also along the sheet's edges

The IPCC Third Assessment Report (2001) estimated accumulation at 520 ± 26 gigatonnes of ice per year, runoff and bottom melting at 297 ± 32 Gt/yr and 32 ± 3 Gt/yr, respectively, and iceberg production at 235 ± 33 Gt/yr. On balance, the IPCC estimate is −44 ± 53 Gt/yr, which means that the ice sheet may currently be melting. Data from 1996 to 2005 show that the ice sheet is thinning even faster than supposed by the IPCC. According to the study, in 1996 Greenland was losing about 96 km3 (23.0 cu mi) per year in mass from its ice sheet. In 2005, this had increased to about 220 km3 (52.8 cu mi) a year due to rapid thinning near its coasts, while in 2006 it was estimated at 239 km3 (57.3 cu mi) per year. It was estimated that in the year 2007 Greenland ice sheet melting was higher than ever, at 592 km3 (142.0 cu mi). Snowfall was also unusually low, which led to an unprecedented negative surface mass balance of −65 km3 (−15.6 cu mi). If iceberg calving occurred at its average rate, Greenland lost 294 Gt of its mass during 2007 (one km3 of ice weighs about 0.9 Gt).

The IPCC Fourth Assessment Report (2007) noted that it is hard to measure the mass balance precisely, but most results indicate accelerating mass loss from Greenland during the 1990s up to 2005. Assessment of the data and techniques suggests a mass balance for the Greenland Ice Sheet ranging between growth of 25 Gt/yr and loss of 60 Gt/yr for 1961 to 2003, loss of 50 to 100 Gt/yr for 1993 to 2003, and loss at even higher rates between 2003 and 2005. Analysis of gravity data from GRACE satellites indicates that the Greenland ice sheet lost approximately 2900 Gt (0.1% of its total mass) between March 2002 and September 2012. The mean mass loss rate for 2008–2012 was 367 Gt/year.

A paper on Greenland's temperature record shows that the warmest year on record was 1941, while the warmest decades were the 1930s and 1940s. The data used were from stations on the south and west coasts, most of which did not operate continuously over the entire study period. While Arctic temperatures have generally increased, there is some discussion concerning the temperatures over Greenland. First of all, Arctic temperatures are highly variable, making it difficult to discern clear trends at a local level. Also, until recently, an area in the North Atlantic including southern Greenland was one of the only areas in the world showing cooling rather than warming in recent decades, but this cooling has now been replaced by strong warming in the period 1979–2005.

- GLIMPSE Project
- List of glaciers in Greenland
- Moulin (geomorphology)
- Polar ice packs
- Retreat of glaciers since 1850
- Encyclopaedia Britannica. 1999 Multimedia edition.
- Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 881pp.
- Meese, DA, AJ Gow, RB Alley, GA Zielinski, PM Grootes, M Ram, KC Taylor, PA Mayewski, JF Bolzan (1997) The Greenland Ice Sheet Project 2 depth-age scale: Methods and results. Journal of Geophysical Research. C. Oceans. 102(C12):26,411-26,423.
- Thiede, JC Jessen, P Knutz, A Kuijpers, N Mikkelsen, N Norgaard-Pedersen, and R Spielhagen (2011) Millions of Years of Greenland Ice Sheet History Recorded in Ocean Sediments. Polarforschung. 80(3):141-159. - "The Secrets in Greenland's Ice Sheet". The New York Times. 2015. - Impacts of a Warming Arctic: Arctic Climate Impact Assessment, Cambridge University Press, 2004. - "Glacial Earthquakes Point to Rising Temperatures in Greenland – Lamont-Doherty Earth Observatory News". columbia.edu. - ScienceDaily, 10 October 2008: "An Accurate Picture Of Ice Loss In Greenland" - "BBC NEWS – Science/Nature – Greenland melt 'speeding up'". bbc.co.uk. - Small Glaciers Account for Most of Greenland's Recent Ice Loss Newswise, Retrieved on September 15, 2008. - Climate change and trace gases. James Hansen, Makiko Sato, et al. Phil.Trans.R.Soc.A (2007)365,1925–1954, doi:10.1098/rsta.2007.2052. Published online 18 May 2007, - "Eemian interglacial reconstructed from a Greenland folded ice core". Nature. 493: 489–494. January 24, 2013. doi:10.1038/nature11789. - "Greenland enters melt mode". Science News. - Wall, Tim. "Greenland Hits 97 Percent Meltdown in July". Discovery News. - "NASA Made Up 150 Year Melt Cycle". Daily Kos. - "The Accumulation Record from the GISP2 Core as an Indicator of Climate Change Throughout the Holocene" (PDF). sciencemag.org. - Statham, Peter J.; Skidmore, Mark; Tranter, Martyn (2008-09-01). "Inputs of glacially derived dissolved and colloidal iron to the coastal ocean and implications for primary productivity". Global Biogeochemical Cycles. 22 (3): GB3013. doi:10.1029/2007GB003106. ISSN 1944-9224. - "Glaciers Contribute Significant Iron to North Atlantic Ocean" (news release). Woods Hole Oceanographic Institution. March 10, 2013. Retrieved March 18, 2013. - Hopwood, Mark James; Connelly, Douglas Patrick; Arendt, Kristine Engel; Juul-Pedersen, Thomas; Stinchcombe, Mark; Meire, Lorenz; Esposito, Mario; Krishna, Ram (2016-01-01). "Seasonal changes in Fe along a glaciated Greenlandic fjord.". Marine Biogeochemistry. 4: 15. doi:10.3389/feart.2016.00015. - Martin, John H.; Fitzwater, Steve E.; Gordon, R. Michael (1990-03-01). "Iron deficiency limits phytoplankton growth in Antarctic waters". Global Biogeochemical Cycles. 4 (1): 5–12. doi:10.1029/GB004i001p00005. ISSN 1944-9224. - Nielsdóttir, Maria C.; Moore, Christopher Mark; Sanders, Richard; Hinz, Daria J.; Achterberg, Eric P. (2009-09-01). "Iron limitation of the postbloom phytoplankton communities in the Iceland Basin". Global Biogeochemical Cycles. 23 (3): GB3001. doi:10.1029/2008GB003410. ISSN 1944-9224. - Arendt, Kristine Engel; Nielsen, Torkel Gissel; Rysgaard, Sren; Tnnesson, Kajsa (2010-02-22). "Differences in plankton community structure along the Godthåbsfjord, from the Greenland Ice Sheet to offshore waters". Marine Ecology Progress Series. 401: 49–62. doi:10.3354/meps08368. - Brown, Dwayne; Cabbage, Michael; McCarthy, Leslie; Norton, Karen (20 January 2016). "NASA, NOAA Analyses Reveal Record-Shattering Global Warm Temperatures in 2015". NASA. Retrieved 21 January 2016. - Bennartz, R.; Shupe, M. D.; Turner, D. D.; Walden, V. P.; Steffen, K.; Cox, C. J.; Kulie, M. S.; Miller, N. B.; Pettersen, C. "July 2012 Greenland melt extent enhanced by low-level liquid clouds". Nature. 496 (7443): 83–86. doi:10.1038/nature12002. - Van Tricht, K.; Lhermitte, S.; Lenaerts, J. T. M.; Gorodetskaya, I. V.; L’Ecuyer, T. S.; Noël, B.; van den Broeke, M. R.; Turner, D. D.; van Lipzig, N. P. M. (2016-01-12). 
"Clouds enhance Greenland ice sheet meltwater runoff". Nature Communications. 7: 10266. doi:10.1038/ncomms10266. - Stefan Rahmstorf, Jason E. Box, Georg Feulner, Michael E. Mann, Alexander Robinson, Scott Rutherford & Erik J. Schaffernicht. "Exceptional twentieth-century slowdown in Atlantic Ocean overturning circulation". Nature. doi:10.1038/nclimate2554. - "Melting Greenland ice sheet may affect global ocean circulation, future climate". Phys.org. 2016. - "Images Show Breakup of Two of Greenland's Largest Glaciers, Predict Disintegration in Near Future". NASA Earth Observatory. August 20, 2008. Retrieved 2008-08-31. - "Huge ice island breaks from Greenland glacier". BBC News. - Iceberg breaks off from Greenland's Petermann Glacier 19 July 2012 - "Surface Melt-Induced Acceleration of Greenland Ice-Sheet Flow by Zwally et al., " - "Fracture Propagation to the Base of the Greenland Ice Sheet During Supraglacial Lake Drainage by Das. et al.," - "Thomas R.H (2004), Force-perturbation analysis of recent thinning and acceleration of Jakobshavn Isbrae, Greenland, Journal of Glaciology 50 (168): 57–66. " - "Thomas, R. H. Abdalati W, Frederick E, Krabill WB, Manizade S, Steffen K, (2003) Investigation of surface melting and dynamic thinning on Jakobshavn Isbrae, Greenland. Journal of Glaciology 49, 231–239." - "Letters to Nature Nature 432, 608–610 (2 December 2004) | doi:10.1038/nature03130; Received 7 July 2004; Accepted 8 October 2004 Large fluctuations in speed on Greenland's Jakobshavn Isbræ glacier by Joughin, Abdalati and Fahnestock" - "Rates of southeast Greenland ice volume loss...by Howat et al.". AGU. - "Greenland Ice Sheet: is land-terminating ice thinning at anomalously high rates by Sole et al.," - "Rapid and synchronous ice-dynamic changes in East Greenland by Luckman, Murray. de Lange and Hanna" - "Greenland Ice Sheet: is land-terminating ice thinning at anomalously high rates by Sole et al.," - "Moulins calving fronts and Greenland outletglacier acceleration by Pelto" - "Modelling Precipitation over ice sheets: an assessment using Greenland", Gerard H. Roe, University of Washington, - "Greenland Ice Loss Doubles in Past Decade, Raising Sea Level Faster". Jet Propulsion Laboratory News release, Thursday, 16 February 2006. - Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Chapter 4 Observations: Changes in Snow, Ice and Frozen Ground.IPCC, 2007. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 996 pp. - "Arctic Report Card: Update for 2012; Greenland Ice Sheet". - "A Greenland temperature record spanning two centuries" JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 111, D11105, doi:10.1029/2005JD006810, 2006. Vinther, Anderson, Jones, Briffa, Cappelen. - see Arctic Climate Impact Assessment (2004) and IPCC Second Assessment Report, among others. - IPCC, 2007. Trenberth, K.E., P.D. Jones, P. Ambenje, R. Bojariu, D. Easterling, A. Klein Tank, D. Parker, F. Rahimzadeh, J.A. Renwick, M. Rusticucci, B. Soden and P. Zhai, 2007: Observations: Surface and Atmospheric Climate Change. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. 
Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. - Real Climate: the Greenland Ice - Geological Survey of Denmark and Greenland (GEUS); GEUS has much scientific material on Greenland. - Emporia State University – James S. Aber, Lecture 2: Modern Glaciers and Ice Sheets. - Arctic Climate Impact Assessment - GRACE ice mass measurement: "Recent Land Ice Mass Flux from Spaceborne Gravimetry" - Greenland ice cap melting faster than ever, Bristol University - Greenland Ice Mass Loss: Jan. 2004 – June 2014 (NASA animation) - AGU 2015: Eric Rignot - Ice Sheet Systems and Sea Level Change (Sea level rise)
Current density

In physics, current density in general is a measure of the density of flow of a conserved charge, in other words the flux of that charge (the two terms are sometimes used synonymously). As such, the term "current density" can also be applied to other conserved quantities, like mass, energy, chemical concentration, etc. In the context of electromagnetism, and related fields in solid state physics, condensed matter physics, etc., the charge is electric charge, in which case the associated current density is the electric current per unit area of cross section. It is defined as a vector whose magnitude is the electric current per cross-sectional area. In SI units, the electric current density is measured in amperes per square metre.

For current density as a vector J, the surface integral of it over a surface S, followed by an integral over the time duration t1 to t2, gives the total amount of charge flowing through the surface in that time (t2 − t1):

$$q = \int_{t_1}^{t_2} \iint_S \mathbf{J} \cdot \mathrm{d}\mathbf{A} \, \mathrm{d}t$$

The area required to calculate the flux is real or imaginary, flat or curved, either as a cross-sectional area or a surface. For example, for charge carriers passing through an electrical conductor, the area is the cross-section of the conductor, at the section considered. If the current density J passes through the area at an angle θ to the area normal n̂, then

$$\mathbf{J} \cdot \hat{\mathbf{n}} = J \cos\theta$$

where $\cos\theta = \hat{\mathbf{J}} \cdot \hat{\mathbf{n}}$ is the dot product of the unit vectors. That is, the component of current density passing through the surface (i.e. normal to it) is J cos θ, while the component of current density passing tangential to the area is J sin θ; but there is no current density actually passing through the area in the tangential direction. The only component of current density passing normal to the area is the cosine component.

Current density is important to the design of electrical and electronic systems. Circuit performance depends strongly upon the designed current level, and the current density then is determined by the dimensions of the conducting elements. For example, as integrated circuits are reduced in size, despite the lower current demanded by smaller devices, there is a trend toward higher current densities to achieve higher device numbers in ever smaller chip areas. See Moore's law. At high frequencies, current density can increase because the conducting region in a wire becomes confined near its surface, the so-called skin effect.

High current densities have undesirable consequences. Most electrical conductors have a finite, positive resistance, making them dissipate power in the form of heat. The current density must be kept sufficiently low to prevent the conductor from melting or burning up, the insulating material failing, or the desired electrical properties changing. At high current densities the material forming the interconnections actually moves, a phenomenon called electromigration. In superconductors, excessive current density may generate a strong enough magnetic field to cause spontaneous loss of the superconductive property.

The analysis and observation of current density is also used to probe the physics underlying the nature of solids, including not only metals, but also semiconductors and insulators. An elaborate theoretical formalism has developed to explain many fundamental observations.

Charge carriers which are free to move constitute a free current density, which is given by expressions such as those in this section.
The free current density can be written as

$$\mathbf{J}_f = \rho \mathbf{v}_d$$

where $\rho = qn$ is the charge density (SI unit: coulombs per cubic metre), in which n(r, t) is the number of particles per unit volume ("number density") (SI unit: m−3), q is the charge of the individual particles with density n (SI unit: coulombs), and $\mathbf{v}_d$ is the average drift velocity of the carriers.

A common approximation to the current density assumes the current simply is proportional to the electric field, as expressed by:

$$\mathbf{J} = \sigma \mathbf{E}$$

Conductivity σ is the reciprocal (inverse) of electrical resistivity and has the SI units of siemens per metre (S m−1), and E has the SI units of newtons per coulomb (N C−1) or, equivalently, volts per metre (V m−1).

A more fundamental approach to calculation of current density is based upon a non-local, time-lagged response:

$$\mathbf{J}(\mathbf{r}, t) = \int_{-\infty}^{t} \int \sigma(\mathbf{r} - \mathbf{r}', t - t') \, \mathbf{E}(\mathbf{r}', t') \, \mathrm{d}^3 r' \, \mathrm{d}t'$$

indicating the lag in response by the time dependence of σ, and the non-local nature of the response to the field by the spatial dependence of σ, both calculated in principle from an underlying microscopic analysis, for example, in the case of small enough fields, from the linear response function for the conductive behaviour in the material. See, for example, Giuliani or Rammer. The integral extends over the entire past history up to the present time.

The above conductivity and its associated current density reflect the fundamental mechanisms underlying charge transport in the medium, both in time and over distance. A Fourier transform in space and time then results in

$$\mathbf{J}(\mathbf{k}, \omega) = \sigma(\mathbf{k}, \omega) \, \mathbf{E}(\mathbf{k}, \omega)$$

where σ(k, ω) is now a complex function. In many materials, for example, in crystalline materials, the conductivity is a tensor, and the current is not necessarily in the same direction as the applied field. Aside from the material properties themselves, the application of magnetic fields can alter conductive behaviour.

Currents also arise in materials when there is a non-uniform distribution of charge. The polarization and magnetization of the material contribute terms that together add up to the bound current density in the material (the resultant current due to movements of electric and magnetic dipole moments per unit volume):

$$\mathbf{J}_b = \frac{\partial \mathbf{P}}{\partial t} + \nabla \times \mathbf{M}$$

where P is the polarization (electric dipole moment per unit volume) and M is the magnetization (magnetic dipole moment per unit volume). The total current is simply the sum of the free and bound currents:

$$\mathbf{J} = \mathbf{J}_f + \mathbf{J}_b$$

which is an important term in Ampere's circuital law, one of Maxwell's equations, since absence of this term would not predict electromagnetic waves to propagate, or the time evolution of electric fields in general.

The net flow out of some volume V (which can have an arbitrary shape but must be fixed for the calculation) must equal the net change in charge held inside the volume:

$$\oint_S \mathbf{J} \cdot \mathrm{d}\mathbf{A} = -\frac{\mathrm{d}}{\mathrm{d}t} \int_V \rho \, \mathrm{d}V$$

where ρ is the charge density, and dA is a surface element of the surface S enclosing the volume V. The surface integral on the left expresses the current outflow from the volume, and the negatively signed volume integral on the right expresses the decrease in the total charge inside the volume. From the divergence theorem:

$$\oint_S \mathbf{J} \cdot \mathrm{d}\mathbf{A} = \int_V \nabla \cdot \mathbf{J} \, \mathrm{d}V$$

This relation is valid for any volume, independent of size or location, which implies that

$$\nabla \cdot \mathbf{J} = -\frac{\partial \rho}{\partial t}$$

In electrical wiring, the maximum current density can vary from 4 A·mm−2 for a wire with no air circulation around it, to 6 A·mm−2 for a wire in free air. Regulations for building wiring list the maximum allowed current of each size of cable in differing conditions. For compact designs, such as windings of SMPS transformers, the value might be as low as 2 A·mm−2.
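As a rough illustration of the wiring guideline above and of the relation J = ρ v_d, the following sketch computes the current density for a common building-wire cross-section and the corresponding electron drift velocity; the load current, wire size and copper carrier density are typical textbook values assumed purely for illustration:

```python
# Rough numerical sketch; the load current, conductor size and copper carrier
# density below are typical illustrative values, not data from this article.

current = 13.0               # A, assumed load current
area_mm2 = 2.5               # mm^2, a common building-wire cross-section
J_mm2 = current / area_mm2   # current density in A/mm^2
print(f"J = {J_mm2:.1f} A/mm^2 (compare with the 4-6 A/mm^2 guideline above)")

# Drift velocity from J = n * q * v_d, with a textbook carrier density for copper.
n = 8.5e28                   # free electrons per m^3 (typical value for copper)
q = 1.602e-19                # C, elementary charge
J_m2 = J_mm2 * 1e6           # convert A/mm^2 to A/m^2
v_drift = J_m2 / (n * q)     # m/s
print(f"drift velocity ~ {v_drift * 1000:.2f} mm/s")
```

Under these assumptions the density comes out near the top of the quoted range, while the drift velocity is well under a millimetre per second, which is why Joule heating, not carrier speed, sets the practical limit.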
If the wire is carrying high-frequency currents, the skin effect may affect the distribution of the current across the section by concentrating the current on the surface of the conductor. In transformers designed for high frequencies, loss is reduced if Litz wire is used for the windings. This is made of multiple isolated wires in parallel with a diameter twice the skin depth. The isolated strands are twisted together to increase the total skin area and to reduce the resistance due to skin effects.

For the top and bottom layers of printed circuit boards, the maximum current density can be as high as 35 A·mm−2 with a copper thickness of 35 µm. Inner layers cannot dissipate as much heat as outer layers; designers of circuit boards avoid putting high-current traces on inner layers.

In semiconductors, the maximum current density is given by the manufacturer. A common average is 1 mA·µm−2 at 25 °C for 180 nm technology. Above the maximum current density, apart from the Joule heating effect, other effects like electromigration appear at the micrometre scale.

In biological organisms, ion channels regulate the flow of ions (for example, sodium, calcium, potassium) across the membrane in all cells. Current density here is measured in pA·pF−1 (picoamperes per picofarad), that is, current divided by capacitance, a de facto measure of membrane area.

In gas discharge lamps, such as flashlamps, current density plays an important role in the output spectrum produced. Low current densities produce spectral line emission and tend to favour longer wavelengths. High current densities produce continuum emission and tend to favour shorter wavelengths. Low current densities for flash lamps are generally around 1,000 A·cm−2. High current densities can be more than 4,000 A·cm−2.
For almost a century, astronomers and cosmologists have postulated that space is filled with an invisible mass known as “dark matter”. Accounting for 27% of the mass and energy in the observable universe, the existence of this matter was intended to explain all the “missing” baryonic matter in cosmological models. Unfortunately, the concept of dark matter has solved one cosmological problem only to create another: if this matter does exist, what is it made of? So far, theories have ranged from saying that it is made up of cold, warm or hot matter, with the most widely accepted being the Lambda Cold Dark Matter (Lambda-CDM) model. However, a new study produced by a team of European astronomers suggests that the Warm Dark Matter (WDM) model may be able to explain the latest observations made of the early Universe. But first, some explanations are in order. The different theories on dark matter (cold, warm, hot) refer not to the temperatures of the matter itself, but to the size of the particles with respect to the size of a protogalaxy – an early-Universe formation, from which dwarf galaxies would later form. The size of these particles determines how fast they can travel, which determines their thermodynamic properties, and indicates how far they could have traveled – a.k.a. their “free streaming length” (FSL) – before being slowed by cosmic expansion. Whereas hot dark matter would be made up of very light particles with FSLs much larger than a protogalaxy, cold dark matter is believed to be made up of massive particles whose FSL is much smaller than a protogalaxy (hence, a low FSL). Cold dark matter has been speculated to take the form of Massive Compact Halo Objects (MACHOs) like black holes; Robust Associations of Massive Baryonic Objects (RAMBOs) like clusters of brown dwarfs; or a class of undiscovered heavy particles – i.e. Weakly-Interacting Massive Particles (WIMPs), and axions. The widely accepted Lambda-CDM model is based in part on the theory that dark matter is “cold”. As cosmological explanations go, it is the simplest and can account for the formation of galaxies and galaxy clusters. However, there remain some holes in this theory, the biggest of which is that it predicts that there should be many more small, dwarf galaxies in the early Universe than we can account for. In short, the existence of dark matter as massive particles with low FSL would result in small fluctuations in the density of matter in the early Universe – which would lead to large numbers of low-mass galaxies being found as satellites of galactic halos, with large concentrations of dark matter in their centers. Naturally, the absence of these galaxies might lead one to speculate that we simply haven’t spotted them yet, and that IR surveys like the Two-Micron All Sky Survey (2MASS) and the Wide-field Infrared Survey Explorer (WISE) missions might find them in time. But as the international research team – which includes astronomers from the Astronomical Observatory of Rome (INAF), the Italian Space Agency Science Data Center and the Paris Observatory – argues, another possibility is that dark matter is neither hot nor cold, but “warm” – i.e. consisting of middle-mass particles (also undiscovered) with FSLs roughly the same as objects as big as galaxies.
As Dr. Nicola Menci – a researcher with the INAF and the lead author of the study – told Universe Today via email: “The Cold Dark Matter particles are characterized by low root mean square velocities, due to their large masses (usually assumed of the order of >~ 100 GeV, a hundred times the mass of a proton). Such low thermal velocities allow for the clumping of CDM even on very small scales. Conversely, lighter dark matter particles with masses of the order of keV (around 1/500 the mass of the electron) would be characterized by larger thermal velocities, inhibiting the clumping of DM on mass scales of dwarf galaxies. This would suppress the abundance of dwarf galaxies (and of satellite galaxies) and produce shallow inner density profiles in such objects, naturally matching the observations without the need for a strong feedback from stellar populations.” In other words, they found that WDM could better account for the early Universe as we are seeing it today. Whereas the Lambda-CDM model would result in perturbations in densities in the early Universe, the longer FSL of warm dark matter particles would smooth these perturbations out, thus resembling what we see when we look deep into the cosmos to observe the Universe during the epoch of galaxy formation. For the sake of their study, which appeared recently in the July 1st issue of The Astrophysical Journal Letters, the research team relied on data obtained from the Hubble Frontier Fields (HFF) program. Taking advantage of improvements made in recent years, they were able to examine the magnitude of particularly faint and distant galaxies. As Menci explained, this is a relatively new ability which the Hubble Space Telescope would not have had a few years ago: “Since galaxy formation is deeply affected by the nature of DM on the scale of dwarf galaxies, a powerful tool to constrain DM models is to measure the abundance of low-mass galaxies at early cosmic times (high redshifts z=6-8), the epoch of their formation. This is a challenging task since it implies finding extremely faint objects (absolute magnitudes M_UV=-12 to -13) at very large distances (12–13 billion light years), even for the Hubble Space Telescope. “However, the Hubble Frontier Fields programme exploits the gravitational lensing produced by foreground galaxy clusters to amplify the light from distant galaxies. Since the formation of dwarf galaxies is suppressed in WDM models – and the strength of the suppression is larger for lighter DM particles – the high measured abundance of high-redshift dwarf galaxies (~3 galaxies per cubic Mpc) can provide a lower limit on the WDM particle mass, which is completely independent of the stellar properties of galaxies.” The results they obtained provided strict constraints on dark matter and early galaxy formation, and were thus consistent with what HFF has been seeing. These results could indicate that our failure to detect dark matter so far may have been the result of looking for the wrong kind of particles. But of course, these results are just one step in a larger effort, and will require further testing and confirmation. Looking ahead, Menci and his colleagues hope to obtain further information from the HFF program, and hope that future missions will allow them to see if their findings hold up. As already noted, these include infrared astronomy missions, which are expected to “see” more of the early Universe by looking beyond the visible spectrum.
“Our results are based on the abundance of high-redshift dwarfs measured in only two fields,” he said. “However, the HFF program aims at measuring such abundances in six independent fields. The operation of the James Webb Space Telescope in the near future – with a lensing program analogous to the HFF – will allow us to pin down the possible mechanisms for the production of WDM particles, or to rule out WDM models as alternatives to CDM,” he said. For almost a century, dark matter has been a pervasive and elusive mystery, always receding the moment we think we are about to figure it out. But the deeper we look into the known Universe (and the farther back in time), the more we are able to learn about its evolution, and thus test whether our observations accord with our theories. Further Reading: The Astrophysical Journal Letters, AAS Nova
Plasmodium, commonly known as the malaria parasite, is a large genus of parasitic protozoa. Infection with these protozoans is known as malaria, a deadly disease widespread in the tropics. The parasite always has two hosts in its life cycle: a mosquito vector and a vertebrate host. The life cycle is very complex, involving a sequence of different stages both in the vector and the host. These stages include sporozoites, which are injected by the mosquito vector into the host's blood; latent hypnozoites, which may rest undetected in the liver for up to 30 years; merosomes and merozoites, which infect the red cells (erythrocytes) of the blood; trophozoites, which grow in the red cells, and schizonts, which divide there, producing more merozoites that leave to infect more red cells; and male and female sexual forms, gametocytes, which are taken up by other mosquitoes. In the mosquito's midgut, the gametocytes develop into gametes which fertilize each other to form motile zygotes which escape the gut, only to grow into new sporozoites which move to the mosquito's salivary glands, from where they are injected into the mosquito's next host, infecting it and restarting the cycle. The genus Plasmodium was first described by Marchiafava and Celli in 1885. It now contains about 200 species divided into several subgenera; as of 2006 the taxonomy was shifting, and species from other genera are likely to be added to Plasmodium. At least ten species infect humans; other species infect other animals, including birds, reptiles and rodents, while 29 species infect non-human primates. The parasite is thought to have originated from the Dinoflagellates, photosynthetic protozoa. The most common forms of human malaria are caused by Plasmodium falciparum, P. vivax, P. knowlesi, and P. malariae. P. falciparum malaria, common in sub-Saharan Africa, and P. knowlesi malaria, in South East Asia, are especially dangerous.
Taxonomy and host range
As of 2006, the genus is in need of reorganization, as it has been shown that parasites belonging to the genera Haemocystis and Hepatocystis appear to be closely related to Plasmodium. It is likely that other species, such as Haemoproteus meleagridis, will be included in this genus once it is revised. Host range among the mammalian orders is non-uniform. At least 29 species infect non-human primates; rodents outside the tropical parts of Africa are rarely affected; a few species are known to infect bats, porcupines and squirrels; carnivores, insectivores and marsupials are not known to act as hosts. In 1898 Ronald Ross demonstrated the existence of Plasmodium in the wall of the midgut and salivary glands of a Culex mosquito. For this discovery he won the Nobel Prize in 1902. However, credit must also be given to the Italian professor Giovanni Battista Grassi, who showed that human malaria could only be transmitted by Anopheles mosquitoes. For some species the vector may not be a mosquito. Mosquitoes of the genera Culex, Anopheles, Culiseta, Mansonia and Aedes may act as vectors; the known vectors for human malaria (more than 100 species) belong to the genus Anopheles. Bird malaria is commonly carried by species belonging to the genus Culex. Only female mosquitoes bite.
Aside from blood, both sexes live on nectar, but one or more blood meals are needed by the female for egg-laying, because there is very little protein in nectar. The life cycle of Plasmodium is very complex. Sporozoites from the saliva of a biting female mosquito are transmitted to either the blood or the lymphatic system of the recipient. The sporozoites then migrate to the liver and invade hepatocytes. This latent or dormant stage of the Plasmodium sporozoite in the liver is called the hypnozoite. The development from the hepatic stages to the erythrocytic stages was long obscure. In 2006 it was shown that the parasite buds off the hepatocytes in merosomes containing hundreds or thousands of merozoites. These merosomes have subsequently been shown to lodge in the pulmonary capillaries and to disintegrate there slowly over 48–72 hours, releasing merozoites. Erythrocyte invasion is enhanced when blood flow is slow and the cells are tightly packed: both of these conditions are found in the alveolar capillaries. Within the erythrocytes, the merozoites grow first into a ring-shaped form and then into a larger trophozoite form. In the schizont stage, the parasite divides several times to produce new merozoites, which leave the red blood cells and travel within the bloodstream to invade new red blood cells. Most merozoites continue this replicative cycle, but some differentiate into male or female sexual forms (gametocytes) (also in the blood), which are taken up by the female mosquito. In the mosquito's midgut, the gametocytes develop into gametes and fertilize each other, forming a zygote. After a brief period of inactivity, the zygotes transform into motile forms called ookinetes. The ookinetes penetrate and escape the midgut, then embed themselves onto the exterior of the gut membrane and transform into oocysts. The nuclei of the oocysts divide many times to produce large numbers of tiny elongated sporozoites. These sporozoites migrate to the salivary glands of the mosquito, where they are injected into the blood of the next host the mosquito bites. The sporozoites move to the liver, where they repeat the cycle. The pattern of alternation of sexual and asexual reproduction is common in parasitic species. The evolutionary advantages of this type of life cycle were recognised by Mendel. Under favourable conditions, asexual reproduction is superior to sexual, as the parent is well adapted to its environment and its descendants share all its genes. When transferring to a new host, or in times of stress, sexual reproduction is generally superior, as it shuffles the genes of two parents, producing a variety of individuals, some of which will be better adapted to the new environment. Reactivation of the hypnozoites has been reported for up to 30 years after the initial infection in humans. The factors precipitating this reactivation are not known. Hypnozoites have been shown to occur in the species Plasmodium malariae, Plasmodium ovale and Plasmodium vivax. Reactivation does not occur in infections with Plasmodium falciparum. It is not known if hypnozoite reactivation may occur with any of the remaining species that infect humans, but this is presumed to be the case. The life cycle of Plasmodium is best understood in terms of its evolution. The Apicomplexa — the phylum to which Plasmodium belongs — are thought to have originated within the Dinoflagellates, a large group of photosynthetic protozoa.
It is thought that the ancestors of the Apicomplexa were originally prey organisms that evolved the ability to invade the intestinal cells and subsequently lost their photosynthetic ability. Some extant Dinoflagellates, however, can invade the bodies of jellyfish and continue to photosynthesize, which is possible because jellyfish bodies are almost transparent. In other organisms with opaque bodies this ability would most likely rapidly be lost. It is thought that Plasmodium evolved from a parasite spread by the fecal-oral route which infected the intestinal wall. At some point this parasite evolved the ability to infect the liver. This pattern is seen in the genus Cryptosporidium, to which Plasmodium is distantly related. At some later point this ancestor developed the ability to infect blood cells and to survive in and infect mosquitoes. Plasmodium subsequently evolved a mechanism to invade the salivary glands of mosquitoes, allowing for transmission from mosquito to host. Once mosquito transmission was firmly established, the previous fecal-oral route was lost within the Plasmodium genus. The survivorship and relative fitness of mosquitoes are not adversely affected by Plasmodium infection, which indicates the importance of vector fitness in shaping the evolution of Plasmodium. Plasmodium has evolved the capability to manipulate mosquito feeding behavior: mosquitoes harboring Plasmodium have a higher propensity to bite than uninfected mosquitoes, a tendency that has facilitated the spread of Plasmodium to its various hosts. Current (2007) theory suggests that the genera Plasmodium, Hepatocystis and Haemoproteus evolved from Leukocytozoon species. Parasites of the genus Leukocytozoon infect white blood cells (leukocytes), liver and spleen cells, and are transmitted by 'black flies' (Simulium species) — a large genus of flies related to the mosquitoes. Leukocytes, hepatocytes and most spleen cells actively phagocytose particulate matter, making entry into the cell easier for the parasite. The mechanism by which Plasmodium species enter erythrocytes is still very unclear, taking as it does less than 30 seconds. It is not yet known if this mechanism evolved before mosquitoes became the main vectors for transmission of Plasmodium. Plasmodium evolved about 130 million years ago. This period coincided with the rapid spread of the angiosperms (flowering plants), an expansion thought to be due to at least one genomic duplication event. It seems probable that the increase in the number of flowers led to an increase in the number of mosquitoes and their contact with vertebrates. Environmental factors play a considerable role in the evolution of Plasmodium and the transmission of malaria. The genetic information of Plasmodium falciparum signals a recent expansion that coincides with the agricultural revolution. It is likely that the development of extensive agriculture increased mosquito population densities by giving rise to more breeding sites, which may have triggered the evolution and expansion of Plasmodium falciparum. Mosquitoes evolved in what is now South America about 230 million years ago. There are over 3500 recognised species, but to date their evolution has not been well worked out, so a number of gaps in our knowledge of the evolution of Plasmodium remain. It seems probable that birds were the first group infected by Plasmodium, followed by the reptiles — probably the lizards. At some point primates and rodents became infected.
The remaining species infected outside these groups seem likely to be due to relatively recent events. There are over one hundred species of mosquito-transmitted Plasmodium. The phylogeny of these malarial parasites suggests that the Plasmodium of mammalian hosts forms a well-defined clade strongly associated with specialization to the Anopheles mosquito vector. This was a major evolutionary transition that allowed Plasmodium to exploit humans and other mammals. The high mortality and morbidity caused by malaria — especially that caused by P. falciparum — has placed the greatest selective pressure on the human genome in recent history. Several genetic factors provide some resistance to Plasmodium infection, including sickle cell trait, thalassaemia traits, glucose-6-phosphate dehydrogenase deficiency, and the absence of Duffy antigens on red blood cells. Although there are therapeutic medications to treat malaria, Plasmodium has accumulated increasing drug resistance over time. A recent examination has shown that even artemisinin, one of the most powerful anti-malarial drugs, has been experiencing decreased efficacy due to the development of resistance. All the species examined to date have 14 chromosomes, one mitochondrion and one plastid (also known as an apicoplast, an organelle similar to a chloroplast). The chromosomes vary from 500 kilobases to 3.5 megabases in length. It is presumed that this is the pattern throughout the genus. The plastid, unlike those found in algae, is not photosynthetic. Its function is not fully known; however, it has been demonstrated that some essential metabolic pathways, such as isoprenoid, Fe-S cluster, fatty acid and phospholipid biosynthesis, occur in this organelle, and that it also still possesses its own genome, partly shared with the nucleus. Plasmodium belongs to the family Plasmodiidae (Levine, 1988), order Haemosporidia and phylum Apicomplexa. There are 450 recognised species in this order. Many species of this order are undergoing reexamination of their taxonomy with DNA analysis. It seems likely that many of these species will be reassigned after these studies have been completed. For this reason the entire order is outlined here.
- Genus Plasmodium
- Subgenus Asiamoeba (lizards)
- Subgenus Bennettinia (birds)
- Subgenus Carinamoeba (reptiles)
- Subgenus Giovannolaia (birds)
- Subgenus Haemamoeba (birds)
- Subgenus Huffia (birds)
- Subgenus Lacertamoeba (reptiles)
- Subgenus Laverania (higher primates)
- Subgenus Novyella (birds)
- Subgenus Paraplasmodium (lizards)
- Subgenus Plasmodium (monkeys, higher primates)
- Subgenus Sauramoeba (reptiles)
- Subgenus Vinckeia (non-primate mammals)
- Genus Polychromophilus
- Genus Rayella
- Genus Saurocytozoon
The genera Plasmodium, Fallisia and Saurocytozoon all cause malaria in lizards. All are carried by Diptera (true two-winged flies). Pigment is absent in the genus Garnia. Non-pigmented gametocytes are typically the only forms found in Saurocytozoon; pigmented forms may occasionally be found in the leukocytes. Fallisia produces non-pigmented asexual and gametocyte forms in leukocytes and thrombocytes. The full taxonomic name of a species includes the subgenus, but this is often omitted. The full name indicates some features of the morphology and the type of host species. The only two species in the subgenus Laverania are P. falciparum and P. reichenowi. Parasites infecting other mammals, including lower primates (lemurs and others), are classified in the subgenus Vinckeia.
The distinction between P. falciparum and P. reichenowi and the other species infecting higher primates was based on morphological findings, but has since been confirmed by DNA analysis. Vinckeia, while previously considered to be something of a taxonomic 'rag bag', has recently been shown to form a coherent grouping. The remaining groupings here are based on the morphology of the parasites; revisions to this system are likely as more species are subjected to DNA analysis. The four subgenera Giovannolaia, Haemamoeba, Huffia and Novyella were created by Corradetti et al. for the known avian malarial species. A fifth — Bennettinia — was created in 1997 by Valkiunas. The relationships between the subgenera are a matter of current investigation; Martinsen et al.'s (2006) paper outlines what was known at the time. As of 2007, P. juxtanucleare is the only known member of the subgenus Bennettinia. Unlike the mammalian and bird malarias, those affecting reptiles have been more difficult to classify. In 1966 Garnham classified those with large schizonts as Sauramoeba, those with small schizonts as Carinamoeba, and the single then-known species infecting snakes (Plasmodium wenyoni) as Ophidiella. He was aware of the arbitrariness of this system and that it might not prove to be biologically valid. Telford in 1988 used this scheme as the basis for the accepted (2007) system. Each of the subgenera Bennettinia, Giovannolaia, Haemamoeba, Huffia and Novyella is defined by its own set of diagnostic morphological characteristics. Species in the subgenus Carinamoeba infect lizards; their schizonts normally give rise to fewer than 8 merozoites, unlike those in the subgenus Sauramoeba, which also infect lizards but whose schizonts normally give rise to more than 8 merozoites.
Species infecting humans
The species of Plasmodium that infect humans include:
- Plasmodium falciparum (the cause of malignant tertian malaria)
- Plasmodium vivax (the most frequent cause of benign tertian malaria)
- Plasmodium ovale (the other, less frequent, cause of benign tertian malaria)
- Plasmodium malariae (the cause of benign quartan malaria)
- Plasmodium knowlesi (the cause of severe quotidian malaria, recognised in South East Asia since 1965)
- Plasmodium brasilianum
- Plasmodium cynomolgi
- Plasmodium cynomolgi bastianellii
- Plasmodium inui
- Plasmodium rhodiani
- Plasmodium schweitzi
- Plasmodium semiovale
- Plasmodium simium
The first four listed here are the most common species that infect humans. Nearly all human deaths from malaria are caused by the first species, P. falciparum, mainly in sub-Saharan Africa. With the use of the polymerase chain reaction, additional species have been and are still being identified that infect humans. One possible experimental infection has been reported with Plasmodium eylesi: fever and low-grade parasitemia were apparent at 15 days. The volunteer (Dr Bennett) had previously been infected by Plasmodium cynomolgi, and the infection was not transferable to a gibbon (P. eylesi's natural host), so this cannot be regarded as definitive evidence of its ability to infect humans. A second case has been reported that may have been a case of P. eylesi, but the author was not certain of the infecting species. A possible infection with Plasmodium tenue has been reported.
This report described a case of malaria in a three-year-old black girl from Georgia, US, who had never been outside the US. She suffered from both P. falciparum and P. vivax malaria, and while forms similar to those described for P. tenue were found in her blood, even the author was skeptical about the validity of the diagnosis. Confusingly, P. tenue was proposed in the same year (1914) for a species found in birds. The human species is now considered probably a misdiagnosis, and the bird species is described on the P. tenue page. The only known host of P. falciparum and P. malariae is humans. P. vivax, however, can infect chimpanzees. Infection tends to be low grade but may be persistent and remain a source of parasites for humans for some time. P. vivax can also infect orangutans. P. ovale can be transmitted to chimpanzees. P. ovale has an unusual distribution, being found in Africa, the Philippines and New Guinea. In spite of its admittedly poor transmission to chimpanzees, and given its discontiguous spread, it is suspected that P. ovale is a zoonosis with an as yet unidentified host. If so, the host is likely to be a primate. The remaining species capable of infecting humans all have other primate hosts. Plasmodium shortii and Plasmodium osmaniae are now considered junior synonyms of Plasmodium inui. Taxonomy in parasitology before DNA-based methods was always problematic, and revisions are continuing, leaving many obsolete names for Plasmodium species that infect humans.
Infections in primates
The species that infect primates other than humans include: P. bouillize, P. brasilianum, P. bucki, P. cercopitheci, P. coatneyi, P. coulangesi, P. cynomolgi, P. eylesi, P. fieldi, P. foleyi, P. fragile, P. girardi, P. georgesi, P. gonderi, P. hylobati, P. inui, P. jefferyi, P. joyeuxi, P. knowlesi, P. lemuris, P. percygarnhami, P. petersi, P. reichenowi, P. rodhaini, P. sandoshami, P. semnopitheci, P. silvaticum, P. simiovale, P. simium, P. uilenbergi, P. vivax and P. youngei. Most if not all Plasmodium species infect more than one host: the host records shown here should be regarded as incomplete. Primate mosquito vectors include Anopheles albimanus, a vector of P. vivax. Subspecies infecting primates include P. cynomolgi bastianellii and P. cynomolgi ceylonensis, subspecies of P. cynomolgi. The evolution of these species is still being worked out, and the relationships given here should be regarded as tentative. This grouping, while originally made on morphological grounds, now has considerable support at the DNA level.
Infections in non-primate mammals
The subgenus Vinckeia was created by Garnham to accommodate the mammalian parasites other than those infecting primates. Species infecting lemurs have also been included in this subgenus. P. aegyptensis, P. berghei, P. chabaudi, P. inopinatum, P. yoelii and P. vinckei infect rodents; P. berghei, P. chabaudi, P. yoelii and P. vinckei have been used to study malarial infections in the laboratory. Other members of this subgenus infect other mammalian hosts. Host records include P. aegyptensis from the Egyptian grass rat (Arvicanthis niloticus), and recorded subspecies include P. berghei yoelii. A number of less well documented species, listed by Coatney et al., should be regarded as dubious.
Infections in birds
Species in five Plasmodium subgenera infect birds — Bennettinia, Giovannolaia, Haemamoeba, Huffia and Novyella. Giovannolaia appears to be a polyphyletic group and may be subdivided in the future.
As of 2014, DNA evidence is helping to improve understanding of the diversity of Plasmodium species that infect birds. Species infecting birds include: P. accipiteris, P. alloelongatum, P. anasum, P. ashfordi, P. bambusicolai, P. bigueti, P. biziurae, P. buteonis, P. cathemerium, P. circumflexum, P. coggeshalli, P. corradettii, P. coturnix, P. dissanaikei, P. durae, P. elongatum, P. fallax, P. forresteri, P. gallinaceum, P. garnhami, P. giovannolai, P. griffithsi, P. gundersi, P. guangdong, P. hegneri, P. hermani, P. hexamerium, P. huffi, P. jiangi, P. juxtanucleare, P. kempi, P. lophurae, P. lutzi, P. matutinum, P. nucleophilum, P. papernai, P. paranucleophilum, P. parvulum, P. pediocetti, P. paddae, P. pinotti, P. polare, P. relictum, P. rouxi, P. tenue, P. tejerai, P. tumbayaensis and P. vaughani. Avian host records include P. accipiteris from the Levant sparrowhawk (Accipiter brevipes). Among inter-related and doubtful species, P. durae is related to P. asanum, P. circumflexum, P. fallax, P. formosanum, P. gabaldoni, P. hegneri, P. lophurae, P. pediocetti, P. pinotti, and P. polare. A number of additional species have been described in birds — P. centropi, P. chloropsidis, P. gallinuae, P. herodialis, P. heroni, P. mornony, P. pericorcoti and P. ploceii — but the suggested speciation was based at least in part on the idea of 'one host — one species'. It has not been possible to reconcile the descriptions with any of the recognised species, and these are not regarded as valid species; as further investigations are made into this genus, these species may be resurrected. A species P. japonicum has been reported, but this appears to be the only report of this species and it should therefore be regarded as of dubious validity.
Infections in reptiles
Over 90 species and subspecies of Plasmodium infect lizards, and they have been reported from over 3200 species of lizard and 29 species of snake. Only three species — P. pessoai, P. tomodoni and P. wenyoni — infect snakes. Species infecting reptiles include: P. achiotense, P. aeuminatum, P. agamae, P. arachniformis, P. attenuatum, P. aurulentum, P. australis, P. azurophilum, P. balli, P. basilisci, P. beebei, P. beltrani, P. brumpti, P. brygooi, P. chiricahuae, P. circularis, P. cnemaspi, P. cnemidophori, P. colombiense, P. cordyli, P. diminutivum, P. diploglossi, P. egerniae, P. fairchildi, P. floridense, P. gabaldoni, P. giganteum, P. gologoense, P. gracilis, P. guyannense, P. heischi, P. holaspi, P. icipeensis, P. iguanae, P. josephinae, P. kentropyxi, P. lacertiliae, P. lainsoni, P. lepidoptiformis, P. lionatum, P. loveridgei, P. lygosomae, P. mabuiae, P. mackerrasae, P. maculilabre, P. marginatum, P. mexicanum, P. michikoa, P. minasense, P. pelaezi, P. pessoai, P. pifanoi, P. pitmani, P. rhadinurum, P. robinsoni, P. sasai, P. saurocaudatum, P. scorzai, P. siamense, P. tanzaniae, P. tomodoni, P. torrealbai, P. tribolonoti, P. tropiduri, P. uluguruense, P. uzungwiense, P. vacuolatum, P. vastator, P. volans, P. wenyoni and P. zonuriae. Known vectors include Lutzomyia or Culicoides species for P. agamae. Recognised subspecies of P. fairchildi are P. fairchildi fairchildi and P. fairchildi hispaniolae. P. floridense is closely related to P. tropiduri and P. minasense.
Species reclassified into other genera
As of 2007, a number of species formerly placed in Plasmodium are regarded as belonging to the genus Hepatocystis instead.
- Chavatte JM, Chiron F, Chabaud A, Landau I (March 2007).
"[Probable speciations by "host-vector 'fidelity'": 14 species of Plasmodium from magpies]". Parasite (in French) 14 (1): 21–37. PMID 17432055. - "Malaria Parasites Develop in Lymph Nodes". HHMI News. Howard Hughes Medical Institute. 22 January 2006. - Sturm A, Amino R, van de Sand C, et al. (September 2006). "Manipulation of host hepatocytes by the malaria parasite for delivery into liver sinusoids". Science 313 (5791): 1287–90. doi:10.1126/science.1129720. PMID 16888102. - Baer K, Klotz C, Kappe SH, Schnieder T, Frevert U (November 2007). "Release of hepatic Plasmodium yoelii merozoites into the pulmonary microvasculature". PLoS Pathog. 3 (11): e171. doi:10.1371/journal.ppat.0030171. PMC 2065874. PMID 17997605. - Ghosh, A.K; Devenport, M; Jethwaney, D. "Malaria parasite invasion of the mosquito salivary glands requires interaction between the Plasmoidum TRAP and the Anopheles saglin proteins". PLoS Pathog. 5(1):e1000265. - Ferguson, HM; Mackinnon, MJ; Chan, BH; Read, AF. (2003). "Mosquito mortality and the evolution of malaria virulence". Evolution 57 (12): 2792–804. doi:10.1554/03-211. - Koella, J.C.; Sørensen, F.L.; Anderson, R.A. (1998). "The malaria parasite, Plasmodium falciparum, increases the frequency of multiple feeding of its mosquito vector, Anopheles gambiae". Proc Biol Sci 265 (1398): 763–8. doi:10.1098/rspb.1998.0358. - Hume J.C., Lyons E.J., Day K.P. 2003. Human migration, mosquitoes and the evolution of Plasmodium falciparum. Trends Parasitol. 19(3):144-9. - Martinsen ES, Perkins SL, Schall JJ. (2008). "A three-genome phylogeny of malaria parasites (Plasmodium and closely related genera): evolution of life-history traits and host switches". Molecular Phylogenetic Evolution 47 (1): 261–273. doi:10.1016/j.ympev.2007.11.012. PMID 18248741. - Weimin Liu, Yingying Li, Gerald H. Learn, Rebecca S. Rudicell, Joel D. Robertson, Brandon F. Keele, Jean-Bosco N. Ndjango, Crickette M. Sanz, David B. Morgan, Sabrina Locatelli, Mary K. Gonder, Philip J. Kranzusch, Peter D. Walsh, Eric Delaporte, Eitel Mpoudi-Ngole, Alexander V. Georgiev, Martin N. Muller, George M. Shaw, Martine Peeters, Paul M. Sharp, Julian C. Rayner & Beatrice H. Hahn (23 September 2010). "Origin of the human malaria parasite Plasmodium falciparum in gorillas" 467 (7314). pp. 420–425. doi:10.1038/nature09442. PMC 2997044. PMID 20864995. - Kwiatkowski DP (2005). "How malaria has affected the human genome and what human genetics can teach us about malaria". American Journal of Human Genetics 77 (2): 171–92. doi:10.1086/432519. PMC 1224522. PMID 16001361. - Hedrick PW (2011). "Population genetics of malaria resistance in humans". Heredity 107 (4): 283–304. doi:10.1038/hdy.2011.16. PMC 3182497. PMID 21427751. - Ashley, E.A.; Dhorda, M.; Fairhurst, R.M. et al. (2014). "Spread of artemisinin resistance in Plasmodium falciparum malaria". N Engl J Med 371 (5): 411–23. - vanDooren, G; Striepen, B (2013). "The Algal Past and Parasite Present of Apicoplast". Annual Reviews of Microbiology 67: 271–289. doi:10.1146/annurev-micro-092412-155741. PMID 23808340. - Perkins SL, Schall JJ (October 2002). [0972:AMPOMP2.0.CO;2 "A molecular phylogeny of malarial parasites recovered from cytochrome b gene sequences"]. J. Parasitol. 88 (5): 972–8. doi:10.1645/0022-3395(2002)088[0972:AMPOMP]2.0.CO;2. PMID 12435139. - Yotoko, K.S.C.; Elisei C. (November 2006). "Malaria parasites (Apicomplexa, Haematozoea) and their relationships with their hosts: is there an evolutionary cost for the specialization?". J. Zoo. Syst. Evol. Res. 44 (4): 265–273. 
doi:10.1111/j.1439-0469.2006.00377.x.
- Corradetti A, Garnham PCC, Laird M (1963). "New classification of the avian malaria parasites". Parassitologia 5: 1–4.
- Valkiunas G (1997). "Bird Haemosporidia". Acta Zoologica Lituanica 3–5: 1–607. ISSN 1392-1657.
- Martinsen ES, Waite JL, Schall JJ (April 2007). "Morphologically defined subgenera of Plasmodium from avian hosts: test of monophyly by phylogenetic analysis of two mitochondrial genes". Parasitology 134 (Pt 4): 483–90. doi:10.1017/S0031182006001922. PMID 17147839.
- Garnham 1966
- Telford S (1988). "A contribution to the systematics of the reptilian malaria parasites, family Plasmodiidae (Apicomplexa: Haemosporina)". Bulletin of the Florida State Museum Biological Sciences 34 (2): 65–96.
- Tsukamoto M (1977). "An imported human malarial case characterized by severe multiple infections of the red blood cells". Ann. Trop. Med. Parasit. 19 (2): 95–104.
- Russel PF (1928). "Plasmodium tenue (Stephens): A review of the literature and a case report". Am. J. Trop. Med. s1–8 (5): 449–479.
- Reid MJ, Ursic R, Cooper D, et al. (December 2006). "Transmission of human and macaque Plasmodium spp. to ex-captive orangutans in Kalimantan, Indonesia". Emerging Infect. Dis. 12 (12): 1902–8. doi:10.3201/eid1212.060191. PMC 3291341. PMID 17326942.
- Coatney GR, Roudabush RL (1936). "A catalog and host-index of the genus Plasmodium". J. Parasitol. 22 (4): 338–353. doi:10.2307/3271859.
- Collins WE, Sullivan JS, Nace D, Williams T, Williams A, Barnwell JW (February 2008). "Observations on the sporozoite transmission of Plasmodium vivax to monkeys". J. Parasitol. 94 (1): 287–8. doi:10.1645/GE-1283.1. PMID 18372652.
- Collins WE, Richardson BB, Morris CL, Sullivan JS, Galland GG (July 1998). "Salvador II strain of Plasmodium vivax in Aotus monkeys and mosquitoes for transmission-blocking vaccine trials". Am. J. Trop. Med. Hyg. 59 (1): 29–34. PMID 9684622.
- Collins WE, Sullivan JS, Nace D, et al. (April 2002). "Experimental infection of Anopheles farauti with different species of Plasmodium". J. Parasitol. 88 (2): 295–8. doi:10.1645/0022-3395(2002)088[0295:EIOAFW]2.0.CO;2. PMID 12054000.
- Collins WE, Morris CL, Richardson BB, Sullivan JS, Galland GG (August 1994). "Further studies on the sporozoite transmission of the Salvador I strain of Plasmodium vivax". J. Parasitol. 80 (4): 512–7. doi:10.2307/3283184. PMID 8064516.
- Tan CH, Vythilingam I, Matusop A, Chan ST, Singh B (2008). "Bionomics of Anopheles latens in Kapit, Sarawak, Malaysian Borneo in relation to the transmission of zoonotic simian malaria parasite Plasmodium knowlesi". Malar. J. 7: 52. doi:10.1186/1475-2875-7-52. PMC 2292735. PMID 18377652.
- Abd-el-Aziz GA, Landau I, Miltgen F (1975). "[Description of Plasmodium aegyptensis n. sp., presumed parasite of the Muridae Arvicanthis noloticus in Upper Egypt]". Ann Parasitol Hum Comp (in French) 50 (4): 419–24. PMID 1211772.
- Sandosham AA, Yap LF, Omar I (September 1965). "A malaria parasite, Plasmodium (Vinckeia) booliati sp. nov., from a Malayan giant flying squirrel". Med J Malaya 20 (1): 3–7. PMID 4221411.
- Keymer IF (June 1966). "Studies on Plasmodium (Vinckeia) cephalophi of the grey duiker (Sylvicapra grimmia)". Ann Trop Med Parasitol 60 (2): 129–38. PMID 5962467.
- Landau I, Chabaud AG (1978). "[Description of P. cyclopsi n. sp. a parasite of the microchiropteran bat Hipposideros cyclops in Gabon (author's transl)]". Ann Parasitol Hum Comp (in French) 53 (3): 247–53. PMID 697287.
- Lien JC, Cross JH (December 1968). "Plasmodium (Vinckeia) watteni sp. n. from the Formosan giant flying squirrel, Petaurista petaurista grandis". J. Parasitol. 54 (6): 1171–4. doi:10.2307/3276986. PMID 5757690.
- Wiersch SC, Maier WA, Kampen H (May 2005). "Plasmodium (Haemamoeba) cathemerium gene sequences for phylogenetic analysis of malaria parasites". Parasitol. Res. 96 (2): 90–4. doi:10.1007/s00436-005-1324-8. PMID 15812672.
- Clark N, Clegg S, Lima M (2014). "A review of global diversity in avian haemosporidians (Plasmodium and Haemoproteus: Haemosporida): new insights from molecular data". International Journal for Parasitology 44 (5): 329–338. doi:10.1016/j.ijpara.2014.01.004. PMID 24556563.
- Valkiūnas G, Zehtindjiev P, Hellgren O, Ilieva M, Iezhova TA, Bensch S (May 2007). "Linkage between mitochondrial cytochrome b lineages and morphospecies of two avian malaria parasites, with a description of Plasmodium (Novyella) ashfordi sp. nov". Parasitol. Res. 100 (6): 1311–22. doi:10.1007/s00436-006-0409-3. PMID 17235548.
- Landau I, Chabaud AG, Bertani S, Snounou G (December 2003). "Taxonomic status and re-description of Plasmodium relictum (Grassi et Feletti, 1891), Plasmodium maior Raffaele, 1931, and description of P. bigueti n. sp. in sparrows". Parassitologia 45 (3-4): 119–23. PMID 15267099.
- Kirkpatrick CE, Lauer DM (January 1985). "Hematozoa of raptors from southern New Jersey and adjacent areas". J. Wildl. Dis. 21 (1): 1–6. doi:10.7589/0090-3558-21.1.1. PMID 3981737.
- Earlé RA, Horak IG, Huchzermeyer FW, Bennett GF, Braack LE, Penzhorn BL (September 1991). "The prevalence of blood parasites in helmeted guineafowls, Numida meleagris, in the Kruger National Park". Onderstepoort J. Vet. Res. 58 (3): 145–7. PMID 1923376.
- Valkiūnas G, Zehtindjiev P, Dimitrov D, Krizanauskiene A, Iezhova TA, Bensch S (May 2008). "Polymerase chain reaction-based identification of Plasmodium (Huffia) elongatum, with remarks on species identity of haemosporidian lineages deposited in GenBank". Parasitol. Res. 102 (6): 1185–93. doi:10.1007/s00436-008-0892-9. PMID 18270739.
- Murata K, Nii R, Sasaki E, et al. (February 2008). "Plasmodium (Bennettinia) juxtanucleare infection in a captive white eared-pheasant (Crossoptilon crossoptilon) at a Japanese zoo". J. Vet. Med. Sci. 70 (2): 203–5. doi:10.1292/jvms.70.203. PMID 18319584.
- Christensen BM, Barnes HJ, Rowley WA (July 1983). "Vertebrate host specificity and experimental vectors of Plasmodium (Novyella) kempi sp. n. from the eastern wild turkey in Iowa". J. Wildl. Dis. 19 (3): 204–13. doi:10.7589/0090-3558-19.3.204. PMID 6644918.
- Manwell RD (November 1968). "Plasmodium octamerium n. sp., an avian malaria parasite from the pintail whydah bird Vidua macroura". J. Protozool. 15 (4): 680–5. doi:10.1111/j.1550-7408.1968.tb02194.x. PMID 5719065.
- Valkiũnas G, Iezhova TA (August 2001). "A comparison of the blood parasites in three subspecies of the yellow wagtail Motacilla flava". J. Parasitol. 87 (4): 930–4. doi:10.1645/0022-3395(2001)087[0930:ACOTBP]2.0.CO;2. PMID 11534666.
- Poinar G (May 2005). "Plasmodium dominicana n. sp. (Plasmodiidae: Haemospororida) from Tertiary Dominican amber". Syst. Parasitol. 61 (1): 47–52. doi:10.1007/s11230-004-6354-6. PMID 15928991.
- Manwell RD (February 1966). "Plasmodium japonicum, P. juxtanucleare and P. nucleophilum in the Far East". J. Protozool. 13 (1): 8–11. doi:10.1111/j.1550-7408.1966.tb01860.x. PMID 5912391.
- Schall JJ (December 2000).
"Transmission success of the malaria parasite Plasmodium mexicanum into its vector: role of gametocyte density and sex ratio". Parasitology 121 (Pt 6): 575–80. doi:10.1017/s0031182000006818. PMID 11155927. - Southgate BA (1970). "Plasmodium (Sauramoeba) giganteum in Agama cyanogaster: a new host record". Trans. R. Soc. Trop. Med. Hyg. 64 (1): 12–3. PMID 5462484. - Garnham PC, Telford SR (November 1984). "A new malaria parasite Plasmodium (Sauramoeba) heischi in skinks (Mabuya striata) from Nairobi, with a brief discussion of the distribution of malaria parasites in the family Scincidae". J. Protozool. 31 (4): 518–21. doi:10.1111/j.1550-7408.1984.tb05494.x. PMID 6512723. - Telford SR (October 1986). "Fallisia parasites (Haemosporidia: Plasmodiidae) from the flying lizard, Draco maculatus (Agamidae) in Thailand". J. Parasitol. 72 (5): 766–9. doi:10.2307/3281471. PMID 3100759. - Telford SR (1979). "A taxonomic revision of small neotropical saurian Malarias allied to Plasmodium minasense". Ann Parasitol Hum Comp 54 (4): 409–22. PMID 533109. - Telford SR, Telford SR (April 2003). [0362:RAROPP2.0.CO;2 "Rediscovery and redescription of Plasmodium pifanoi and description of two additional Plasmodium parasites of Venezuelan lizards"]. J. Parasitol. 89 (2): 362–8. doi:10.1645/0022-3395(2003)089[0362:RAROPP]2.0.CO;2. PMID 12760655. - Garnham, P.C.C. (1966). Malaria Parasites And Other Haemosporidia. Oxford: Blackwell. ISBN 0397601328. - Hewitt, R.I. (1940). Bird Malaria. American Journal of Hygiene 15. Baltimore: Johns Hopkins Press. - Laird, M. (1998). Avian Malaria in the Asian Tropical Subregion. Singapore: Springer. ISBN 9813083190. - Baldacci P, Ménard R (October 2004). "The elusive malaria sporozoite in the mammalian host". Mol. Microbiol. 54 (2): 298–306. doi:10.1111/j.1365-2958.2004.04275.x. PMID 15469504. - Bledsoe GH (December 2005). "Malaria primer for clinicians in the United States" (PDF). South. Med. J. 98 (12): 1197–204; quiz 1205, 1230. doi:10.1097/01.smj.0000189904.50838.eb. PMID 16440920. - Shortt HE (1951). "Life-cycle of the mammalian malaria parasite". Br. Med. Bull. 8 (1): 7–9. PMID 14944807. - Slater LB (2005). "Malarial birds: modeling infectious human disease in animals". Bull Hist Med 79 (2): 261–94. doi:10.1353/bhm.2005.0092. PMID 15965289. |Wikispecies has information related to: Plasmodium|
The study of galaxy formation and growth is primarily concerned with how galaxies developed from small, nearly homogeneous seeds into the massive, clustered, and often faint systems we image today. Inflation, galaxy formation, superwinds, gravitational collapse, and pulsar winds are among the most prevalent theories about how the evolution of a galaxy happens; a more recent theory of galaxy formation has also been developed from X-ray maps. Black holes are believed to be produced in very cold and dense centres. Research conducted at the Institute of Space Sciences and Astronomy, University of Toronto, claimed that there are four such giant galaxies; one of them has a mass almost the same as the sun, while the others are extremely compact. Galaxy formation depends on many factors, and none of them is constant. The only constant factor is the presence of gas in a system: gas is necessary for the process of galaxy formation. If there is no gas, no stars will form, and hence no galaxies will form. Many theories formulated by astronomers suggest a relation between galaxy formation and the presence of dark matter halos, which are either produced in place or evolved from a primordial cloud of gas. Because the gas cloud has no escape route, astronomers think that it forms large dark holes at the centres of merging massive galaxies. These holes may be responsible for creating large concentrations of neutral gas, which eventually make the clouds opaque and lead to the formation of stars. Stellar formation, galaxy formation and structure formation all depend on gravity, but gravity alone does not account for all the matter in the universe. The core regions of the universe are much less dense than the rest, which means that something other than gravity must be responsible for the creation of large-scale structure. Gravity, plus the strong pull of other large-scale structures like superclusters, must have caused the emergence of what we call stars in the early universe. Star formation also seems to have a hand in ensuring that the universe is in a state of expansion. One theory is the idea of cluster formation. Basically, clusters are groups of very small stars which, due to their mutual rotation, cause the light from these stars to spread out and reach us at a very long distance. Many studies show that most of the clusters have a huge mass, a good estimate being over 10 solar masses, with numerous small stars within them. Some models of cluster formation suggest the presence of small planet-like objects at the centres of the clusters, with a mass not much greater than the mass of a single star. Another hypothesis regarding the formation of galaxy-sized objects involves the idea of halo formation. What do astronomers mean by this? Basically, a halo is a ring or "hump" around a large galaxy. In spiral galaxies like the Milky Way, many of the halo regions are filled with extremely cold dust. While astronomers have detected the presence of this cold dust around many elliptical galaxies, they do not have a complete idea of the nature of this dark matter. Scientists speculate that a filament of material, containing molecules of ice and metal, might be forming around many of these spiral arms. A more complex model of galaxy formation involves a combination of multiple theories, one of which concerns the idea of a so-called "solar halo" around very hot regions of a galaxy.
According to this model, a halo will not only give off infrared radiation but also emit X-rays and gamma rays, moving faster than the solar system. Evidence has already been found to support this halo formation process, including the discovery of a huge galaxy cluster which contains nearly a hundred thousand stars. Other galaxy formation theories involve a more massive concentration of dark matter halos around extremely compact and fast-moving gas clouds. It is important to note that the most massive halos are very faint and are usually found very close to the centres of galaxy clusters, making them difficult to spot through telescopes. While astronomers have found a number of dark matter halos in clusters, they have been unable to determine their exact sizes, which makes them nearly impossible to study directly. Astronomers' theories about the evolution of these halos suggest that they are made up of several hundred million low-density gas clouds that are rotating very quickly. They may also form a halo around a very dense galaxy like the Milky Way, composed primarily of cool gas.
We are now going to work through the COUNTA function in Excel with the help of examples. As a preview: COUNTA builds on Excel's COUNT function, whose role is to return the count of numeric values – numbers here include percentages, dates, fractions, negative numbers and formula results – while empty cells are ignored.
What does the COUNTA Function do in Excel
The COUNTA function is categorized under Excel statistical functions. Financial analysts find it useful for keeping count of cells in a given range; besides numbers, one often needs to count every cell that holds a value. The COUNTA function can count cells containing several types of data values. This includes text, numbers, Boolean values, date/time values, error values, and empty text strings (""). Such a capability is valuable in practice. As a statistical function, it counts the number of non-blank cells in a range or cell reference. The COUNTA function is a built-in function in Excel that is categorized as a Statistical Function. It can be used as a worksheet function (WS) in Excel: as a worksheet function, COUNTA can be entered as part of a formula in a cell of a worksheet. The COUNTA function is also commonly referred to as the Excel "COUNTIF Not Blank" formula.
The Syntax of the COUNTA Function
The syntax for the COUNTA function in Microsoft Excel is as follows:
COUNTA( value1, [value2, ... value_n] )
Parameters or Arguments
The function accepts the following arguments:
- Value1: This represents the values that are to be counted.
- Value2: This also represents the values that are to be counted.
- The first argument is mandatory, while the second is optional.
The COUNTA function returns a numeric value. Note: A maximum of 255 arguments can be entered in MS Excel 2007 and subsequent versions. Earlier versions of Excel can handle only 30 arguments.
Examples of the COUNTA Function in Excel
Example 1: Single Range
Here we want to count the non-blank cells in the range B2:B44, using the data shown in the worksheet.
- We will use the formula =COUNTA(B2:B44).
- The COUNTA function counts the number of data cells from B2 to B44.
- It returns 40; blank cells, such as B6 and B7, are excluded. Hence, all the values are counted except the blank cells.
Example 2: Multiple Ranges
Consider the same data as we determine the non-blank cells in two ranges, B2:B44 and C2:C44.
- Use the formula =COUNTA(B2:B44,C2:C44).
- The COUNTA function counts the number of data cells from B2 to B44 and C2 to C44.
- It returns 83; blank cells, such as B6 and B7, are again excluded. Hence, all values are counted except the blank cells.
Example 3: Multiple Columns Range
In this scenario, the grades of the students are recorded in the worksheet, and we want to count the number of students who received grades. We use formulae of the same form as above. Note: The COUNTA function counts the number of grades in Maths from B2 to B44, in Math from C2 to C44, and in Filipino from E2 to E44. It returns the values 40, 43, and 43 respectively.
Example 4: Value Arguments With a Range
Let us supply direct values and a range to the COUNTA function. Working with the previous worksheet, and assuming we have a student named Jones, we can see how his grades are counted by supplying the range together with direct value arguments in a single COUNTA formula.
Example 5: Multiple Value Arguments
This time, let's supply direct values to the COUNTA function and find the number of non-blank values among them.
The COUNTA function returns the number of non-blank values among the supplied direct values; here, it returns 43.
How To Count Non-Blank Cells using COUNTA?
The COUNTA function is designed to count the non-blank cells of single or multiple ranges, which may also be non-adjacent. Take a look at this example:
- The formula for counting cells in the range B2:B44 is =COUNTA(B2:B44).
Similarly, the function also counts value arguments provided directly – parameters that are neither a cell nor a range of cells. The COUNTA function can be used to count categories of data such as:
- The number of students who collected their grades.
- The number of students having high grades.
- The number of failing grades.
How to use the COUNTA Function in Excel
COUNTA function, step by step:
- Select A45
- Double-click COUNTA in the menu
- Select range A22:A44
- Hit enter
What is the difference between the COUNTA and the COUNT functions of Excel
The difference between COUNTA and COUNT is described in the table below.
|COUNTA|COUNT|
|Counts the non-empty cells within a specified range.|Counts only the cells containing numeric values.|
|More inclusive, because it counts all kinds of data values.|Ignores non-numeric values when provided as a cell reference.|
Note: While deciding which of the two functions to use, the data to be counted must be analyzed.
In summary, the COUNTA function in Excel is a built-in counting function. It may sound complex, but it is simple once you learn how it works. On its own it performs a simple calculation, but when you combine it with other Excel functions, you will be amazed by how powerful Excel is at getting meaning out of enormous datasets. Do you have any questions related to this article? If so, please mention them in the comments section.
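For readers who think in code, here is a small Python sketch (purely illustrative, not part of the original article; the sample cell values are invented) that mimics the COUNT/COUNTA distinction described above, treating None as an empty cell:

```python
# Invented sample "cells": numbers, text, a Boolean, an empty string, blanks.
cells = [95, "absent", 88, None, "", 76, True, None]

# COUNT-like: numeric values only (exclude bools, which are ints in Python).
count = sum(1 for c in cells
            if isinstance(c, (int, float)) and not isinstance(c, bool))

# COUNTA-like: every non-blank cell, including the empty text string "".
counta = sum(1 for c in cells if c is not None)

print(count)   # 3
print(counta)  # 6
```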
The constant C is a basic constant whose value must be supplied alongside a function, and it determines which member of a family of functions is meant. For example, the functions F(x) = x² + C all have the same shape, and the value chosen for C determines which particular curve is picked out. The constant C also appears when evaluating the continuity of, and differences between, functions – that is, when determining whether a change in one variable causes a change in another, or whether one variable is just a reference to another.
Example of continuity
Consider the following example: finding the constant c for a function f that is continuous along a curve. The constant c for f on the curve is illustrated in the figure; in that case, the curve runs along the x-axis and the constant is read off the y-axis, giving c = 2. In another case, f continuously changes its value as x → 0 and x → 1, and the value of f at any point x is shown in the figure. In this case there are two quantities, 1/x and 1/f, which differ for each value of x. To find their values at any point, we must use integration techniques, which require both the values of x and their differences to produce a difference in f(x) that changes sign.
Definition of limits
Limits are a common domain of analysis for functions. Many functions eventually enter a region where they stop changing and slide down the curve; this is loosely known as a limit or, where the function changes direction at a specific point, an inflection point. A simple way to think about limits is that they are places where the function starts to drop off in intensity or value. Limits are an important part of function theory: a limit is the value that a function reaches or passes through at a certain point of its graph. In math, limits are calculated using limit and difference techniques; these techniques make calculations more straightforward and limits easier to understand. Limits can be tricky to calculate, though – that is why we need the help of functions to do it for us! There are two ways to find a limit of a function: the difference (or less-lengthened) limit technique is used when there is no obvious limit on the function, and the less-lengthened (or zero) limit technique is used when there is an obvious limit but it has been overlooked.
Determining the sign of the limit
When evaluating a limit, it is important to determine whether the limit is positive or negative; both cases have advantages and disadvantages, and several conditions make the limit positive or negative. The most common condition is that the function has a peak or a valley. These points on the curve indicate where the function value lies in relation to other values: if a valley appears on a curve, it may suggest that the function has a positive value there; if a peak appears, it may suggest a zero value. However, neither of these appearances is definitive.
If the limit is positive, then F is continuous on (−∞, ∞) with a constant value. If the limit is negative, then F is continuous on (0, ∞) with a falling value. This is important to note: if the limit is positive, then it does not matter what value of the constant C you look at, as long as (−∞, ∞) is included.
That’s because if you look at a positive value of C, it will be replaced by an equal but opposite negative value of C, which will be continuous on (0, ∞). As we discussed earlier, if we were to find the function F continuously on a set A with only positive values of F, then we would get an infinity speech function. If we were to find F with only negative values of F, then we would get a infinitesimal speech function. If the limit is negative, then F is not continuous on (−∞ ∞) with a constant value If the constant C is large, then F may be continuous on (−∞ ∞) with a value that is negative. This happens more often than you might think! If the limit is zero, then F may not be continuous on (−∞ ∞) with a value that is negative. This happens more often than you might think! In fact, it can happen for some limits of functions that aren’t negative. This can create some tricky situations where we need to know if the limit is positive or negative. Fortunately, we can do some quick math to find out! We can take the logarithm of each side of the function and check whether or not they are positive or negative. If they are, then we know that the limit is positive|>. As shown in this article, there are several functions that are continuous on the constant C and that take positive values. These include the exponential function, logarithmic function, and even trigonometric functions such as sin(x) or cos(x). While most of these functions do not make sense for everyday life, they can be fun to know. For instance, knowing that the logarithm of 10 is 20 is a nice way to stay motivated to learn new things. As always, use your judgment when it comes to knowing these functions. If you feel you need more training in this area, try starting with the basics and working your way up to more advanced ones. A common extension of the function f is Fourier series, ∣f(x), where ∣ stands for “derive”. This article does not discuss Fourier series in depth, but they are an extension of f that create new functions with −∞ 0. The constant C is called the constant value of the function f and it is named after its French founder Louis XV, who developed it. The constant value of a function depends on what other values of the variable you take to denote it. For example, if you used a positive number to denote the area under a curve, then the constant value would be 2 because 1/2 = 0. Since C is an easy value to find for many functions, we will always choose it as their constant value. It is also commonly referred to as the zero point or origin point of the function.
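A brief worked instance of the matching condition described above, using an assumed two-piece function (the specific formulas are illustrative only):

```latex
f(x) =
\begin{cases}
  x + 2,     & x \le 1,\\
  x^{2} + C, & x > 1,
\end{cases}
\qquad
\lim_{x \to 1^{-}} f(x) = 3,
\qquad
\lim_{x \to 1^{+}} f(x) = 1 + C.
```

Continuity at x = 1 requires the two one-sided limits to agree, so 1 + C = 3 and C = 2; any other value of C leaves a jump at x = 1.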
Because we can't sample every part.
Because no matter what we do, no two parts will be the same.
Because we define a product to meet specifications and we can measure how well it conforms.
Because differences between parts are hard (assignable causes) or impossible (chance causes) to predict.
Because we can sample a few and draw conclusions about the whole group.
• What is the objective? We want to sample as little data as possible to draw the most accurate conclusions about the distribution of the values.
• Two types of statistics
Inductive: try to get the overall variance within the group, i.e. assume all of the group should conform (WE USE THIS TYPE)
Deductive: attempt to classify differences that exist within a group (i.e. election polls)
2.1 Data Distributions
• Showing differences: consider Domino's pizzas that must be delivered in < 30 minutes (the tolerance)
• Histograms can be used to show these distributions graphically.
• Cumulative distributions can be used to estimate the probability of an event. For example, if in the graph above we want to know how many pizzas are delivered within 25 minutes, we could read approximately 10 per week off the graph.
• There are typically 10 to 20 divisions in a histogram.
• Percentages can be used on the right axis, in place of counts.
2.1.2 Continuous Distributions
• Histograms are useful for grouped data, but in the cases where the data is continuous, we use probability distributions.
• In general the area under the graph = 1.0; the graphs often stretch (asymptotically) to infinity.
• In particular, some of the distribution properties are:
• In addition, the center of the distribution can vary (i.e. the average or mean).
• More on distributions later.
2.1.3 Describing Distribution Centers With Numbers
• The best-known method is the average. This gives the center of a distribution.
• Another good measure is the median:
If there is an odd number of samples, it is the middle number.
If there is an even number of samples, it is the average of the left and right bounding numbers.
• If the numbers are grouped, the median becomes:
• The mode can be useful for identifying repeated patterns; a mode is the value that occurs most often. Multiple modes are possible.
2.1.4 Dispersion As A Measure of Distribution
• The range of values covered by a distribution is important. The range is the difference between the highest and lowest numbers.
• Standard deviation is a classical measure of grouping (strictly speaking, for normal distributions); the equation is
$s = \sqrt{\dfrac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}}$
• When we use a standard deviation, we can estimate the distribution of the samples.
• By adding standard deviations to increase the range size, the percentage of samples included is approximately 68.3% within ±1σ, 95.4% within ±2σ, and 99.7% within ±3σ.
• An equivalent computational formula for the standard deviation is
$s = \sqrt{\dfrac{\sum x_i^2 - n\bar{x}^2}{n-1}}$
(a short numerical sketch of these measures is given at the end of this section).
2.1.5 The Shape of the Distribution
• Skewed functions: this lack of symmetry tends to indicate a bias in the data (and hence in the real world); a skew factor can be calculated.
• Kurtosis (a4): this is a peaking in the data. It is best used for comparison to other values, i.e. you can watch the trends in the values of a4.
2.1.7 Generalizing From a Few to Many
2.1.8 The Normal Curve
• This is a curve that tends to represent distributions of things in nature (also called Gaussian).
• This distribution can be fitted for populations (μ, σ), or for samples (x̄, s).
• The area under the curve is 1, and therefore will enclose 100% of the population.
• The parameters vary the shape of the distribution.
• The area under the curve indicates the cumulative probability of some event.
• When applied to quality, ±3σ is used to define a typical "process variability" for the product. This is also known as the upper and lower natural limits (UNL & LNL).
2.1.9 Probability plots
• A way to figure out how chances interact.
• Mutually Exclusive: probable events can only happen as one or the other.
• Not Mutually Exclusive: probable events can occur simultaneously.
• Independent Probabilities: events happen separately.
• Dependent Probabilities: the outcome of one event affects the outcome of another event.
• Permutations: used when not only the possibilities but also their order matters.
• Combinations: similar to before, except order does not matter.
• Empirical probability: experimentally determine the probability with
$P(A) \approx \dfrac{\text{number of occurrences of } A}{\text{number of trials}}$
2.2.1 Discrete Distributions
2.2.2 Hypergeometric Distribution
2.2.3 Binomial Distribution
2.2.4 Poisson Distribution
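To make the preceding measures and the distribution families named above concrete, here are two short Python sketches; both use made-up numbers purely for illustration. The first computes the centre and dispersion measures of sections 2.1.3–2.1.4 with the standard library:

```python
import statistics

# Hypothetical pizza delivery times in minutes.
times = [22, 25, 27, 25, 31, 24, 29, 25, 26, 33]

print("mean:", statistics.mean(times))        # centre of the distribution
print("median:", statistics.median(times))    # middle value
print("mode:", statistics.mode(times))        # most frequently occurring value
print("range:", max(times) - min(times))      # highest minus lowest
print("std dev:", statistics.stdev(times))    # sample standard deviation (n - 1 denominator)
```

The second compares the three discrete distributions listed in sections 2.2.2–2.2.4 for a small acceptance-sampling problem, assuming SciPy is available:

```python
from scipy.stats import binom, hypergeom, poisson

# Assumed example: a lot of N = 100 parts containing K = 5 defectives,
# from which a sample of n = 10 parts is drawn.
N, K, n = 100, 5, 10
p = K / N

# Probability of finding exactly one defective in the sample under each model.
print("hypergeometric:", hypergeom.pmf(1, N, K, n))  # sampling without replacement (exact)
print("binomial:", binom.pmf(1, n, p))               # sampling with replacement (approximation)
print("poisson:", poisson.pmf(1, n * p))             # rare-event approximation, mean n*p
```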
Math is measuring, sorting, building, noticing patterns, making comparisons, and describing the environment, as well as counting and knowing the names of shapes. There are many ways to incorporate math learning into everyday moments. - 1 What do children learn while studying mathematics? - 2 What are the 5 areas of math? - 3 What do you learn in basic math? - 4 What do preschoolers learn in the math center? - 5 What is math in early childhood? - 6 What is the best way for kids to learn math? - 7 What is math content area? - 8 What are the 4 basic concepts of mathematics? - 9 What is the purpose of learning mathematics? - 10 How do I teach basic math? - 11 What are some math skills? - 12 How can I teach my 4 year old math? - 13 What are some math activities for preschoolers? - 14 What should be in a math center? - 15 How do small manipulatives and math area help children? What do children learn while studying mathematics? The three basic groups of mathematical concepts that are essential in all topics included in the mathematics curriculum at the elementary school level are number and operations on numbers, spatial thinking and measurement. What are the 5 areas of math? The curriculum covers five content areas at the primary level: Number; Shape and Space; Measurement; Data Handling; and Algebra. What do you learn in basic math? Basic math skills are those that involve making calculations of amounts, sizes or other measurements. Core concepts like addition, subtraction, multiplication and division provide a foundation for learning and using more advanced math concepts. What do preschoolers learn in the math center? Counters, sorting trays, counting toys, dice, abacuses, number boards, and number games are all great math materials to include in your center. Money – Counting money is another skill children need to learn in order to help them budget their income and make purchases when they are older. What is math in early childhood? In every early childhood setting, children should experience effective, research-based curriculum and teaching practices. Math- ematics helps children make sense of their world outside of school and helps them construct a solid foundation for success in school. What is the best way for kids to learn math? In the classroom, your child will learn maths in many different ways – through watching the teacher work out maths problems, doing problems, talking about problems, drawing and writing, playing games, and using calculators, computers and other materials. What is math content area? Number Properties and Operations. This content area focuses on students’ abilities to represent numbers, order numbers, compute with numbers, make estimates appropriate to given situations, use ratios and proportional reasoning, and apply number properties and operations to solve real-world and mathematical problems. What are the 4 basic concepts of mathematics? –addition, subtraction, multiplication, and division– have application even in the most advanced mathematical theories. What is the purpose of learning mathematics? Mathematics provides an effective way of building mental discipline and encourages logical reasoning and mental rigor. In addition, mathematical knowledge plays a crucial role in understanding the contents of other school subjects such as science, social studies, and even music and art. How do I teach basic math? 7 Effective Strategies for Teaching Elementary Math - Make it hands-on. - Use visuals and images. - Find opportunities to differentiate learning. 
- Ask students to explain their ideas. - Incorporate storytelling to make connections to real-world scenarios. - Show and tell new concepts. - Let your students regularly know how they’re doing. What are some math skills? Key Math Skills for School - Number Sense. This is the ability to count accurately—first forward. - Representation. Making mathematical ideas “real” by using words, pictures, symbols, and objects (like blocks). - Spatial sense. How can I teach my 4 year old math? Things to try with your child - Listen to and sing songs and rhymes. Sing – even if it isn’t your strong point! - Talk about numbers around you. - Read together. - Count as much as you can. - Get your hands dirty. - Play maths games. What are some math activities for preschoolers? 15 Hands-On Math Activities for Preschoolers - Patterns with Bears. Counting Bears are a great math manipulative to use with preschoolers. - Sorting Colors with Bears. Sorting is a skill preschoolers should work on a lot. - Money Muncher. - Sorting Jelly Beans. - Shape Wheel. - Shape Sorter. - Noodle Shape Cards. What should be in a math center? 10 Must-Haves for a Math Learning Center - Furniture. Each separate learning center within a preschool needs appropriate furniture and décor for what the area is focusing on. - Storage Containers. - Numbers and Counters. - Measurement Tools. - Operation Cards. - Manipulative Kits. - Problem Solving Aids. How do small manipulatives and math area help children? Skills Children Gain from the Use of Manipulatives They explore patterns through sequencing, ordering, comparison, colors, and textures. A child can develop concentration and perseverance skills while learning about cause and effect and how to creatively analyze and solve problems.
The course gives an overview of chemical reactions from the macroscopic and microscopic points of view. First, the basic concepts, i.e. the reaction rate, reaction rate constant, rate equation, reaction order and molecularity, are given. Then, the characteristics of first-order and second-order reactions are shown. From the microscopic point of view, the transition state theory of the rate constant is discussed. From a more microscopic point of view, the concept of the reaction cross section is introduced instead of the rate constant, and the relation between the reaction rate constant and the reaction cross section is discussed. The outline of the diffusion-controlled reaction is given as an example of reactions in the condensed phase. The aim of this course is to provide the basic recipe for understanding chemical reactions. By the end of this course students will be able to understand 1) chemical reactions from the macroscopic and microscopic points of view, i.e. the atomic and molecular picture, and 2) the basis of reactions in the condensed phase.
Reaction kinetics, Transition state theory, Diffusion-controlled reaction
|✔ Specialist skills||Intercultural skills||Communication skills||Critical thinking skills||Practical and/or problem-solving skills|
The first five lessons concern the chemical reaction from the macroscopic point of view, and the next eight lessons concern the chemical reaction from the microscopic point of view. The last two lessons deal with the chemical reaction in the condensed phase. Every lesson consists mainly of a lecture. From time to time a very small test is assigned during class.
|Course schedule||Required learning|
|Class 1||The reaction from the macroscopic point of view (the reaction rate, the reaction rate constant, and the reaction rate equation)||Understand the basic concepts of reaction kinetics|
|Class 2||The reaction from the macroscopic point of view (the reaction order and molecularity) and the first-order reaction||Understand the difference between the reaction order and molecularity. Understand the characteristics of the first-order reaction.|
|Class 3||The second-order reaction and pseudo-first-order reaction||Understand the characteristics of the second-order reaction and pseudo-first-order reaction.|
|Class 4||The reaction intermediate||Understand what the reaction intermediate is.|
|Class 5||The analysis of complex reactions (the steady-state approximation)||Understand the steady-state approximation as a useful tool for analyzing complex reactions.|
|Class 6||The Arrhenius equation||Understand what the Arrhenius equation is and the concept of the activation energy.|
|Class 7||The Born-Oppenheimer approximation||Understand the Born-Oppenheimer approximation.|
|Class 8||The bimolecular reaction and potential energy surface||Explain the bimolecular reaction by use of the potential energy surface.|
|Class 9||The transition state theory (What is the transition state?)||Explain the outline of the transition state theory.|
|Class 10||The transition state theory (The way to the Arrhenius equation)||Derive the Arrhenius equation within the transition state theory.|
|Class 11||The reaction as a two-body collision (1) - The collision cross section||Explain what the collision cross section is.|
|Class 12||The reaction as a two-body collision (2) - The bimolecular reaction and collision||Understand that the bimolecular reaction can be regarded as a kind of collision.|
|Class 13||The reaction as a two-body collision (3) - The reaction cross section and reaction rate constant||Derive the relation between the rate constant of the bimolecular reaction and the reaction cross section.|
|Class 14||The reaction in the condensed phase - The diffusion, conduction and rate constant of the diffusion-controlled reaction||Understand the diffusion and conduction of solutes and obtain the rate constant of the diffusion-controlled reaction.|
A home-made textbook is distributed. P.W. Atkins, Physical Chemistry (Oxford University Press, 1998).
Students are assessed on their understanding of the basic concepts and their ability to apply them. Small tests during class 10%, homework assignments 10%, final examination 80%.
We recommend that students have successfully completed Introductory Quantum Chemistry (CHM.C201) and Chemical and Statistical Thermodynamics (CHM.C202). We recommend that students take Exercise in Introductory Chemical Kinetics (CHM.C303) as well.
Noriyuki Kouchi: nkouchi[at]chem.titech.ac.jp
Masashi Kitajima: mkitajim[at]chem.titech.ac.jp
Contact by email in advance to schedule an appointment.
Noriyuki Kouchi (West Building 4, Room 508)
Masashi Kitajima (West Building 4, Room 503)
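As a concrete illustration of the macroscopic concepts covered in Classes 1–6, here is a short Python sketch of the integrated first-order rate law and the Arrhenius equation; the numerical values of the rate constant, pre-exponential factor and activation energy are assumed for illustration only.

```python
import numpy as np

# Integrated first-order rate law: [A](t) = [A]0 * exp(-k t)
A0 = 1.0          # initial concentration, mol/L (assumed)
k = 2.0e-3        # first-order rate constant, 1/s (assumed)
t = np.linspace(0.0, 2000.0, 5)
conc = A0 * np.exp(-k * t)
half_life = np.log(2) / k          # t_1/2 = ln 2 / k, independent of [A]0
print("concentrations:", conc)
print("half-life (s):", half_life)

# Arrhenius equation: k(T) = A * exp(-Ea / (R T))
R = 8.314         # gas constant, J/(mol K)
A_factor = 1.0e13 # pre-exponential factor, 1/s (assumed)
Ea = 80.0e3       # activation energy, J/mol (assumed)
for T in (300.0, 350.0):
    print(f"k at {T} K:", A_factor * np.exp(-Ea / (R * T)))
```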
Imaged in enhanced color by MESSENGER in 2008

Mercury is the smallest and innermost planet in the Solar System. Its orbit around the Sun takes 87.97 days, the shortest of all the planets in the Solar System. It is named after the Roman deity Mercury, the messenger of the gods. Like Venus, Mercury orbits the Sun within Earth's orbit as an inferior planet, and its apparent distance from the Sun as viewed from Earth never exceeds 28°. This proximity to the Sun means the planet can only be seen near the western horizon after sunset or eastern horizon before sunrise, usually in twilight. At this time, it may appear as a bright star-like object, but is often far more difficult to observe than Venus. The planet telescopically displays the complete range of phases, similar to Venus and the Moon, as it moves in its inner orbit relative to Earth, which recurs over its synodic period of approximately 116 days.

Mercury rotates in a way that is unique in the Solar System. It is tidally locked with the Sun in a 3:2 spin–orbit resonance, meaning that relative to the fixed stars, it rotates on its axis exactly three times for every two revolutions it makes around the Sun.[a] As seen from the Sun, in a frame of reference that rotates with the orbital motion, it appears to rotate only once every two Mercurian years. An observer on Mercury would therefore see only one day every two Mercurian years. Mercury's axis has the smallest tilt of any of the Solar System's planets (about 1⁄30 degree). Its orbital eccentricity is the largest of all known planets in the Solar System;[b] at perihelion, Mercury's distance from the Sun is only about two-thirds (or 66%) of its distance at aphelion. Mercury's surface appears heavily cratered and is similar in appearance to the Moon's, indicating that it has been geologically inactive for billions of years. Having almost no atmosphere to retain heat, it has surface temperatures that vary diurnally more than on any other planet in the Solar System, ranging from 100 K (−173 °C; −280 °F) at night to 700 K (427 °C; 800 °F) during the day across the equatorial regions. The polar regions are constantly below 180 K (−93 °C; −136 °F). The planet has no known natural satellites. Two spacecraft have visited Mercury: Mariner 10 flew by in 1974 and 1975; and MESSENGER, launched in 2004, orbited Mercury over 4,000 times in four years before exhausting its fuel and crashing into the planet's surface on April 30, 2015. The BepiColombo spacecraft is planned to arrive at Mercury in 2025.

Mercury is one of four terrestrial planets in the Solar System, and is a rocky body like Earth. It is the smallest planet in the Solar System, with an equatorial radius of 2,439.7 kilometres (1,516.0 mi). Mercury is also smaller—albeit more massive—than the largest natural satellites in the Solar System, Ganymede and Titan. Mercury consists of approximately 70% metallic and 30% silicate material. Mercury's density is the second highest in the Solar System at 5.427 g/cm3, only slightly less than Earth's density of 5.515 g/cm3. If the effect of gravitational compression were to be factored out from both planets, the materials of which Mercury is made would be denser than those of Earth, with an uncompressed density of 5.3 g/cm3 versus Earth's 4.4 g/cm3.
Mercury's density can be used to infer details of its inner structure. Although Earth's high density results appreciably from gravitational compression, particularly at the core, Mercury is much smaller and its inner regions are not as compressed. Therefore, for it to have such a high density, its core must be large and rich in iron. Geologists estimate that Mercury's core occupies about 55% of its volume; for Earth this proportion is 17%. Research published in 2007 suggests that Mercury has a molten core. Surrounding the core is a 500–700 km (310–430 mi) mantle consisting of silicates. Based on data from the Mariner 10 mission and Earth-based observation, Mercury's crust is estimated to be 35 km (22 mi) thick. One distinctive feature of Mercury's surface is the presence of numerous narrow ridges, extending up to several hundred kilometers in length. It is thought that these were formed as Mercury's core and mantle cooled and contracted at a time when the crust had already solidified. Mercury's core has a higher iron content than that of any other major planet in the Solar System, and several theories have been proposed to explain this. The most widely accepted theory is that Mercury originally had a metal–silicate ratio similar to common chondrite meteorites, thought to be typical of the Solar System's rocky matter, and a mass approximately 2.25 times its current mass. Early in the Solar System's history, Mercury may have been struck by a planetesimal of approximately 1/6 that mass and several thousand kilometers across. The impact would have stripped away much of the original crust and mantle, leaving the core behind as a relatively major component. A similar process, known as the giant impact hypothesis, has been proposed to explain the formation of the Moon. Alternatively, Mercury may have formed from the solar nebula before the Sun's energy output had stabilized. It would initially have had twice its present mass, but as the protosun contracted, temperatures near Mercury could have been between 2,500 and 3,500 K and possibly even as high as 10,000 K. Much of Mercury's surface rock could have been vaporized at such temperatures, forming an atmosphere of "rock vapor" that could have been carried away by the solar wind. A third hypothesis proposes that the solar nebula caused drag on the particles from which Mercury was accreting, which meant that lighter particles were lost from the accreting material and not gathered by Mercury. Each hypothesis predicts a different surface composition, and there are two space missions set to make observations. MESSENGER, which ended in 2015, found higher-than-expected potassium and sulfur levels on the surface, suggesting that the giant impact hypothesis and vaporization of the crust and mantle did not occur because potassium and sulfur would have been driven off by the extreme heat of these events. BepiColombo, which will arrive at Mercury in 2025, will make observations to test these hypotheses. The findings so far would seem to favor the third hypothesis; however, further analysis of the data is needed. Mercury's surface is similar in appearance to that of the Moon, showing extensive mare-like plains and heavy cratering, indicating that it has been geologically inactive for billions of years. Because knowledge of Mercury's geology had been based only on the 1975 Mariner 10 flyby and terrestrial observations, it is the least understood of the terrestrial planets. As data from MESSENGER orbiter are processed, this knowledge will increase. 
For example, an unusual crater with radiating troughs has been discovered that scientists called "the spider". It was later named Apollodorus. Albedo features are areas of markedly different reflectivity, as seen by telescopic observation. Mercury has dorsa (also called "wrinkle-ridges"), Moon-like highlands, montes (mountains), planitiae (plains), rupes (escarpments), and valles (valleys). Names for features on Mercury come from a variety of sources. Names coming from people are limited to the deceased. Craters are named for artists, musicians, painters, and authors who have made outstanding or fundamental contributions to their field. Ridges, or dorsa, are named for scientists who have contributed to the study of Mercury. Depressions or fossae are named for works of architecture. Montes are named for the word "hot" in a variety of languages. Plains or planitiae are named for Mercury in various languages. Escarpments or rupēs are named for ships of scientific expeditions. Valleys or valles are named for abandoned cities, towns, or settlements of antiquity. Mercury was heavily bombarded by comets and asteroids during and shortly following its formation 4.6 billion years ago, as well as during a possibly separate subsequent episode called the Late Heavy Bombardment that ended 3.8 billion years ago. During this period of intense crater formation, Mercury received impacts over its entire surface, facilitated by the lack of any atmosphere to slow impactors down. During this time Mercury was volcanically active; basins such as the Caloris Basin were filled by magma, producing smooth plains similar to the maria found on the Moon. Data from the October 2008 flyby of MESSENGER gave researchers a greater appreciation for the jumbled nature of Mercury's surface. Mercury's surface is more heterogeneous than either Mars's or the Moon's, both of which contain significant stretches of similar geology, such as maria and plateaus. Impact basins and craters Craters on Mercury range in diameter from small bowl-shaped cavities to multi-ringed impact basins hundreds of kilometers across. They appear in all states of degradation, from relatively fresh rayed craters to highly degraded crater remnants. Mercurian craters differ subtly from lunar craters in that the area blanketed by their ejecta is much smaller, a consequence of Mercury's stronger surface gravity. According to IAU rules, each new crater must be named after an artist that was famous for more than fifty years, and dead for more than three years, before the date the crater is named. The largest known crater is Caloris Basin, with a diameter of 1,550 km. The impact that created the Caloris Basin was so powerful that it caused lava eruptions and left a concentric ring over 2 km tall surrounding the impact crater. At the antipode of the Caloris Basin is a large region of unusual, hilly terrain known as the "Weird Terrain". One hypothesis for its origin is that shock waves generated during the Caloris impact traveled around Mercury, converging at the basin's antipode (180 degrees away). The resulting high stresses fractured the surface. Alternatively, it has been suggested that this terrain formed as a result of the convergence of ejecta at this basin's antipode. Overall, about 15 impact basins have been identified on the imaged part of Mercury. A notable basin is the 400 km wide, multi-ring Tolstoj Basin that has an ejecta blanket extending up to 500 km from its rim and a floor that has been filled by smooth plains materials. 
Beethoven Basin has a similar-sized ejecta blanket and a 625 km diameter rim. Like the Moon, the surface of Mercury has likely incurred the effects of space weathering processes, including Solar wind and micrometeorite impacts. There are two geologically distinct plains regions on Mercury. Gently rolling, hilly plains in the regions between craters are Mercury's oldest visible surfaces, predating the heavily cratered terrain. These inter-crater plains appear to have obliterated many earlier craters, and show a general paucity of smaller craters below about 30 km in diameter. Smooth plains are widespread flat areas that fill depressions of various sizes and bear a strong resemblance to the lunar maria. Notably, they fill a wide ring surrounding the Caloris Basin. Unlike lunar maria, the smooth plains of Mercury have the same albedo as the older inter-crater plains. Despite a lack of unequivocally volcanic characteristics, the localisation and rounded, lobate shape of these plains strongly support volcanic origins. All the smooth plains of Mercury formed significantly later than the Caloris basin, as evidenced by appreciably smaller crater densities than on the Caloris ejecta blanket. The floor of the Caloris Basin is filled by a geologically distinct flat plain, broken up by ridges and fractures in a roughly polygonal pattern. It is not clear whether they are volcanic lavas induced by the impact, or a large sheet of impact melt. One unusual feature of Mercury's surface is the numerous compression folds, or rupes, that crisscross the plains. As Mercury's interior cooled, it contracted and its surface began to deform, creating wrinkle ridges and lobate scarps associated with thrust faults. The scarps can reach lengths of 1000 km and heights of 3 km. These compressional features can be seen on top of other features, such as craters and smooth plains, indicating they are more recent. Mapping of the features has suggested a total shrinkage of Mercury's radius in the range of ~1 to 7 km. Small-scale thrust fault scarps have been found, tens of meters in height and with lengths in the range of a few km, that appear to be less than 50 million years old, indicating that compression of the interior and consequent surface geological activity continue to the present. Images obtained by MESSENGER have revealed evidence for pyroclastic flows on Mercury from low-profile shield volcanoes. MESSENGER data has helped identify 51 pyroclastic deposits on the surface, where 90% of them are found within impact craters. A study of the degradation state of the impact craters that host pyroclastic deposits suggests that pyroclastic activity occurred on Mercury over a prolonged interval. A "rimless depression" inside the southwest rim of the Caloris Basin consists of at least nine overlapping volcanic vents, each individually up to 8 km in diameter. It is thus a "compound volcano". The vent floors are at a least 1 km below their brinks and they bear a closer resemblance to volcanic craters sculpted by explosive eruptions or modified by collapse into void spaces created by magma withdrawal back down into a conduit. Scientists could not quantify the age of the volcanic complex system, but reported that it could be of the order of a billion years. Surface conditions and exosphere The surface temperature of Mercury ranges from 100 to 700 K (−173 to 427 °C; −280 to 800 °F) at the most extreme places: 0°N, 0°W, or 180°W. 
It never rises above 180 K at the poles, due to the absence of an atmosphere and a steep temperature gradient between the equator and the poles. The subsolar point reaches about 700 K during perihelion (0°W or 180°W), but only 550 K at aphelion (90° or 270°W). On the dark side of the planet, temperatures average 110 K. The intensity of sunlight on Mercury's surface ranges between 4.59 and 10.61 times the solar constant (1,370 W·m−2). Although the daylight temperature at the surface of Mercury is generally extremely high, observations strongly suggest that ice (frozen water) exists on Mercury. The floors of deep craters at the poles are never exposed to direct sunlight, and temperatures there remain below 102 K; far lower than the global average. Water ice strongly reflects radar, and observations by the 70-meter Goldstone Solar System Radar and the VLA in the early 1990s revealed that there are patches of high radar reflection near the poles. Although ice was not the only possible cause of these reflective regions, astronomers think it was the most likely. The icy regions are estimated to contain about 1014–1015 kg of ice, and may be covered by a layer of regolith that inhibits sublimation. By comparison, the Antarctic ice sheet on Earth has a mass of about 4×1018 kg, and Mars's south polar cap contains about 1016 kg of water. The origin of the ice on Mercury is not yet known, but the two most likely sources are from outgassing of water from the planet's interior or deposition by impacts of comets. Mercury is too small and hot for its gravity to retain any significant atmosphere over long periods of time; it does have a tenuous surface-bounded exosphere containing hydrogen, helium, oxygen, sodium, calcium, potassium and others at a surface pressure of less than approximately 0.5 nPa (0.005 picobars). This exosphere is not stable—atoms are continuously lost and replenished from a variety of sources. Hydrogen atoms and helium atoms probably come from the solar wind, diffusing into Mercury's magnetosphere before later escaping back into space. Radioactive decay of elements within Mercury's crust is another source of helium, as well as sodium and potassium. MESSENGER found high proportions of calcium, helium, hydroxide, magnesium, oxygen, potassium, silicon and sodium. Water vapor is present, released by a combination of processes such as: comets striking its surface, sputtering creating water out of hydrogen from the solar wind and oxygen from rock, and sublimation from reservoirs of water ice in the permanently shadowed polar craters. The detection of high amounts of water-related ions like O+, OH−, and H3O+ was a surprise. Because of the quantities of these ions that were detected in Mercury's space environment, scientists surmise that these molecules were blasted from the surface or exosphere by the solar wind. Sodium, potassium and calcium were discovered in the atmosphere during the 1980–1990s, and are thought to result primarily from the vaporization of surface rock struck by micrometeorite impacts including presently from Comet Encke. In 2008, magnesium was discovered by MESSENGER. Studies indicate that, at times, sodium emissions are localized at points that correspond to the planet's magnetic poles. This would indicate an interaction between the magnetosphere and the planet's surface. On November 29, 2012, NASA confirmed that images from MESSENGER had detected that craters at the north pole contained water ice. 
MESSENGER's principal investigator Sean Solomon is quoted in The New York Times estimating the volume of the ice to be large enough to "encase Washington, D.C., in a frozen block two and a half miles deep".[c] Magnetic field and magnetosphere Despite its small size and slow 59-day-long rotation, Mercury has a significant, and apparently global, magnetic field. According to measurements taken by Mariner 10, it is about 1.1% the strength of Earth's. The magnetic-field strength at Mercury's equator is about 300 nT. Like that of Earth, Mercury's magnetic field is dipolar. Unlike Earth's, Mercury's poles are nearly aligned with the planet's spin axis. Measurements from both the Mariner 10 and MESSENGER space probes have indicated that the strength and shape of the magnetic field are stable. It is likely that this magnetic field is generated by a dynamo effect, in a manner similar to the magnetic field of Earth. This dynamo effect would result from the circulation of the planet's iron-rich liquid core. Particularly strong tidal effects caused by the planet's high orbital eccentricity would serve to keep the core in the liquid state necessary for this dynamo effect. Mercury's magnetic field is strong enough to deflect the solar wind around the planet, creating a magnetosphere. The planet's magnetosphere, though small enough to fit within Earth, is strong enough to trap solar wind plasma. This contributes to the space weathering of the planet's surface. Observations taken by the Mariner 10 spacecraft detected this low energy plasma in the magnetosphere of the planet's nightside. Bursts of energetic particles in the planet's magnetotail indicate a dynamic quality to the planet's magnetosphere. During its second flyby of the planet on October 6, 2008, MESSENGER discovered that Mercury's magnetic field can be extremely "leaky". The spacecraft encountered magnetic "tornadoes" – twisted bundles of magnetic fields connecting the planetary magnetic field to interplanetary space – that were up to 800 km wide or a third of the radius of the planet. These twisted magnetic flux tubes, technically known as flux transfer events, form open windows in the planet's magnetic shield through which the solar wind may enter and directly impact Mercury's surface via magnetic reconnection This also occurs in Earth's magnetic field. The MESSENGER observations showed the reconnection rate is ten times higher at Mercury, but its proximity to the Sun only accounts for about a third of the reconnection rate observed by MESSENGER. Orbit, rotation, and longitude Mercury has the most eccentric orbit of all the planets; its eccentricity is 0.21 with its distance from the Sun ranging from 46,000,000 to 70,000,000 km (29,000,000 to 43,000,000 mi). It takes 87.969 Earth days to complete an orbit. The diagram illustrates the effects of the eccentricity, showing Mercury's orbit overlaid with a circular orbit having the same semi-major axis. Mercury's higher velocity when it is near perihelion is clear from the greater distance it covers in each 5-day interval. In the diagram the varying distance of Mercury to the Sun is represented by the size of the planet, which is inversely proportional to Mercury's distance from the Sun. This varying distance to the Sun leads to Mercury's surface being flexed by tidal bulges raised by the Sun that are about 17 times stronger than the Moon's on Earth. Combined with a 3:2 spin–orbit resonance of the planet's rotation around its axis, it also results in complex variations of the surface temperature. 
The resonance makes a single solar day on Mercury last exactly two Mercury years, or about 176 Earth days. Mercury's orbit is inclined by 7 degrees to the plane of Earth's orbit (the ecliptic), as shown in the diagram on the right. As a result, transits of Mercury across the face of the Sun can only occur when the planet is crossing the plane of the ecliptic at the time it lies between Earth and the Sun, which is in May or November. This occurs about every seven years on average. Mercury's axial tilt is almost zero, with the best measured value as low as 0.027 degrees. This is significantly smaller than that of Jupiter, which has the second smallest axial tilt of all planets at 3.1 degrees. This means that to an observer at Mercury's poles, the center of the Sun never rises more than 2.1 arcminutes above the horizon. At certain points on Mercury's surface, an observer would be able to see the Sun peek up a little more than two-thirds of the way over the horizon, then reverse and set before rising again, all within the same Mercurian day. This is because approximately four Earth days before perihelion, Mercury's angular orbital velocity equals its angular rotational velocity so that the Sun's apparent motion ceases; closer to perihelion, Mercury's angular orbital velocity then exceeds the angular rotational velocity. Thus, to a hypothetical observer on Mercury, the Sun appears to move in a retrograde direction. Four Earth days after perihelion, the Sun's normal apparent motion resumes. A similar effect would have occurred if Mercury had been in synchronous rotation: the alternating gain and loss of rotation over revolution would have caused a libration of 23.65° in longitude. For the same reason, there are two points on Mercury's equator, 180 degrees apart in longitude, at either of which, around perihelion in alternate Mercurian years (once a Mercurian day), the Sun passes overhead, then reverses its apparent motion and passes overhead again, then reverses a second time and passes overhead a third time, taking a total of about 16 Earth-days for this entire process. In the other alternate Mercurian years, the same thing happens at the other of these two points. The amplitude of the retrograde motion is small, so the overall effect is that, for two or three weeks, the Sun is almost stationary overhead, and is at its most brilliant because Mercury is at perihelion, its closest to the Sun. This prolonged exposure to the Sun at its brightest makes these two points the hottest places on Mercury. Maximum temperature occurs when the Sun is at an angle of about 25 degrees past noon due to diurnal temperature lag, at 0.4 Mercury days and 0.8 Mercury years past sunrise. Conversely, there are two other points on the equator, 90 degrees of longitude apart from the first ones, where the Sun passes overhead only when the planet is at aphelion in alternate years, when the apparent motion of the Sun in Mercury's sky is relatively rapid. These points, which are the ones on the equator where the apparent retrograde motion of the Sun happens when it is crossing the horizon as described in the preceding paragraph, receive much less solar heat than the first ones described above. Mercury attains inferior conjunction (nearest approach to Earth) every 116 Earth days on average, but this interval can range from 105 days to 129 days due to the planet's eccentric orbit. 
Mercury can come as near as 82.2 gigametres (0.549 astronomical units; 51.1 million miles) to Earth, and that is slowly declining: The next approach to within 82.1 Gm (51.0 million miles) is in 2679, and to within 82.0 Gm (51.0 million miles) in 4487, but it will not be closer to Earth than 80 Gm (50 million miles) until 28,622. Its period of retrograde motion as seen from Earth can vary from 8 to 15 days on either side of inferior conjunction. This large range arises from the planet's high orbital eccentricity. On average, Mercury is the closest planet to the Earth, and it is the closest planet to each of the other planets in the Solar System. The longitude convention for Mercury puts the zero of longitude at one of the two hottest points on the surface, as described above. However, when this area was first visited, by Mariner 10, this zero meridian was in darkness, so it was impossible to select a feature on the surface to define the exact position of the meridian. Therefore, a small crater further west was chosen, called Hun Kal, which provides the exact reference point for measuring longitude. The center of Hun Kal defines the 20° west meridian. A 1970 International Astronomical Union resolution suggests that longitudes be measured positively in the westerly direction on Mercury. The two hottest places on the equator are therefore at longitudes 0° W and 180° W, and the coolest points on the equator are at longitudes 90° W and 270° W. However, the MESSENGER project uses an east-positive convention. For many years it was thought that Mercury was synchronously tidally locked with the Sun, rotating once for each orbit and always keeping the same face directed towards the Sun, in the same way that the same side of the Moon always faces Earth. Radar observations in 1965 proved that the planet has a 3:2 spin-orbit resonance, rotating three times for every two revolutions around the Sun. The eccentricity of Mercury's orbit makes this resonance stable—at perihelion, when the solar tide is strongest, the Sun is nearly still in Mercury's sky. The rare 3:2 resonant tidal locking is stabilized by the variance of the tidal force along Mercury's eccentric orbit, acting on a permanent dipole component of Mercury's mass distribution. In a circular orbit there is no such variance, so the only resonance stabilized in such an orbit is at 1:1 (e.g., Earth–Moon), when the tidal force, stretching a body along the "center-body" line, exerts a torque that aligns the body's axis of least inertia (the "longest" axis, and the axis of the aforementioned dipole) to point always at the center. However, with noticeable eccentricity, like that of Mercury's orbit, the tidal force has a maximum at perihelion and therefore stabilizes resonances, like 3:2, enforcing that the planet points its axis of least inertia roughly at the Sun when passing through perihelion. The original reason astronomers thought it was synchronously locked was that, whenever Mercury was best placed for observation, it was always nearly at the same point in its 3:2 resonance, hence showing the same face. This is because, coincidentally, Mercury's rotation period is almost exactly half of its synodic period with respect to Earth. Due to Mercury's 3:2 spin-orbit resonance, a solar day (the length between two meridian transits of the Sun) lasts about 176 Earth days. A sidereal day (the period of rotation) lasts about 58.7 Earth days. 
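The ~176-day solar day quoted earlier follows directly from these two periods; a short sketch of the arithmetic, using the commonly quoted, more precise value of 58.646 days for the rotation period:

```python
# Mercury's rotation and revolution, in Earth days.
sidereal_day = 58.646    # rotation period relative to the fixed stars
orbital_period = 87.969  # one revolution around the Sun

# The Sun's apparent angular rate in Mercury's sky is the rotation rate
# minus the orbital rate, so the solar day is the reciprocal of that difference.
solar_day = 1.0 / (1.0 / sidereal_day - 1.0 / orbital_period)

print(round(solar_day, 1))                   # ~175.9 Earth days
print(round(solar_day / orbital_period, 2))  # ~2.0 -> one solar day spans two Mercurian years
```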
Simulations indicate that the orbital eccentricity of Mercury varies chaotically from nearly zero (circular) to more than 0.45 over millions of years due to perturbations from the other planets. This was thought to explain Mercury's 3:2 spin-orbit resonance (rather than the more usual 1:1), because this state is more likely to arise during a period of high eccentricity. However, accurate modeling based on a realistic model of tidal response has demonstrated that Mercury was captured into the 3:2 spin-orbit state at a very early stage of its history, within 20 (more likely, 10) million years after its formation. Numerical simulations show that a future secular orbital resonant perihelion interaction with Jupiter may cause the eccentricity of Mercury's orbit to increase to the point where there is a 1% chance that the planet may collide with Venus within the next five billion years.

Advance of perihelion

In 1859, the French mathematician and astronomer Urbain Le Verrier reported that the slow precession of Mercury's orbit around the Sun could not be completely explained by Newtonian mechanics and perturbations by the known planets. He suggested, among possible explanations, that another planet (or perhaps instead a series of smaller 'corpuscules') might exist in an orbit even closer to the Sun than that of Mercury, to account for this perturbation. (Other explanations considered included a slight oblateness of the Sun.) The success of the search for Neptune based on its perturbations of the orbit of Uranus led astronomers to place faith in this possible explanation, and the hypothetical planet was named Vulcan, but no such planet was ever found. The perihelion precession of Mercury is 5,600 arcseconds (1.5556°) per century relative to Earth, or 574.10±0.65 arcseconds per century relative to the inertial ICRF. Newtonian mechanics, taking into account all the effects from the other planets, predicts a precession of 5,557 arcseconds (1.5436°) per century. In the early 20th century, Albert Einstein's general theory of relativity provided the explanation for the observed precession, by formalizing gravitation as being mediated by the curvature of spacetime. The effect is small: just 42.98 arcseconds per century for Mercury; it therefore requires a little over twelve million orbits for a full excess turn. Similar, but much smaller, effects exist for other Solar System bodies: 8.62 arcseconds per century for Venus, 3.84 for Earth, 1.35 for Mars, and 10.05 for 1566 Icarus. Einstein's formula for the perihelion shift per revolution is $\epsilon = \dfrac{24\pi^{3}a^{2}}{T^{2}c^{2}(1-e^{2})}$, where $e$ is the orbital eccentricity, $a$ the semi-major axis, $T$ the orbital period, and $c$ the speed of light. Filling in the values gives a result of 0.1035 arcseconds per revolution or 0.4297 arcseconds per Earth year, i.e., 42.97 arcseconds per century. This is in close agreement with the accepted value of Mercury's perihelion advance of 42.98 arcseconds per century. There may be scientific support, based on studies reported in March 2020, for considering that parts of the planet Mercury may have been habitable, and perhaps that life forms, albeit likely primitive microorganisms, may have existed on the planet. Mercury's apparent magnitude is calculated to vary between −2.48 (brighter than Sirius) around superior conjunction and +7.25 (below the limit of naked-eye visibility) around inferior conjunction. The mean apparent magnitude is 0.23 while the standard deviation of 1.78 is the largest of any planet.
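Returning to the relativistic perihelion-advance formula quoted above, a quick numerical check with Mercury's orbital elements (rounded values from this article) reproduces the quoted shift:

```python
import math

# Mercury's orbital elements and physical constants (rounded).
a = 5.791e10         # semi-major axis, m
e = 0.2056           # orbital eccentricity
T = 87.969 * 86400   # orbital period, s
c = 2.998e8          # speed of light, m/s

# Einstein's perihelion shift per revolution, in radians.
eps = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))

arcsec_per_rev = math.degrees(eps) * 3600
orbits_per_century = 100 * 365.25 * 86400 / T
print(arcsec_per_rev)                       # ~0.103 arcseconds per revolution
print(arcsec_per_rev * orbits_per_century)  # ~43 arcseconds per century
```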
The mean apparent magnitude at superior conjunction is −1.89 while that at inferior conjunction is +5.93. Observation of Mercury is complicated by its proximity to the Sun, as it is lost in the Sun's glare for much of the time. Mercury can be observed for only a brief period during either morning or evening twilight. Like the Moon and Venus, Mercury exhibits phases as seen from Earth. It is "new" at inferior conjunction and "full" at superior conjunction. The planet is rendered invisible from Earth on both of these occasions because of its being obscured by the Sun, except its new phase during a transit. Mercury is technically brightest as seen from Earth when it is at a full phase. Although Mercury is farthest from Earth when it is full, the greater illuminated area that is visible and the opposition brightness surge more than compensates for the distance. The opposite is true for Venus, which appears brightest when it is a crescent, because it is much closer to Earth than when gibbous. Nonetheless, the brightest (full phase) appearance of Mercury is an essentially impossible time for practical observation, because of the extreme proximity of the Sun. Mercury is best observed at the first and last quarter, although they are phases of lesser brightness. The first and last quarter phases occur at greatest elongation east and west of the Sun, respectively. At both of these times Mercury's separation from the Sun ranges anywhere from 17.9° at perihelion to 27.8° at aphelion. At greatest western elongation, Mercury rises at its earliest before sunrise, and at greatest eastern elongation, it sets at its latest after sunset. Mercury can be easily seen from the tropics and subtropics more than from higher latitudes. Viewed from low latitudes and at the right times of year, the ecliptic intersects the horizon at a steep angle. Mercury is 10° above the horizon when the planet appears directly above the Sun (i.e. its orbit appears vertical) and is at maximum elongation from the Sun (28°) and also when the Sun is 18° below the horizon, so the sky is just completely dark.[d] This angle is the maximum altitude at which Mercury is visible in a completely dark sky. At middle latitudes, Mercury is more often and easily visible from the Southern Hemisphere than from the Northern. This is because Mercury's maximum western elongation occurs only during early autumn in the Southern Hemisphere, whereas its greatest eastern elongation happens only during late winter in the Southern Hemisphere. In both of these cases, the angle at which the planet's orbit intersects the horizon is maximized, allowing it to rise several hours before sunrise in the former instance and not set until several hours after sundown in the latter from southern mid-latitudes, such as Argentina and South Africa. An alternate method for viewing Mercury involves observing the planet during daylight hours when conditions are clear, ideally when it is at its greatest elongation. This allows the planet to be found easily, even when using telescopes with 8 cm (3.1 in) apertures. Care must be taken to ensure the instrument isn't pointed directly towards the Sun because of the risk for eye damage. This method bypasses the limitation of twilight observing when the ecliptic is located at a low elevation (e.g. on autumn evenings). Ground-based telescope observations of Mercury reveal only an illuminated partial disk with limited detail. 
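The elongation range quoted above can be checked with a little trigonometry; a minimal sketch, assuming circular, coplanar orbits and the usual perihelion and aphelion distances for Mercury:

```python
import math

r_perihelion = 0.3075  # Mercury's perihelion distance, AU
r_aphelion = 0.4667    # Mercury's aphelion distance, AU
r_earth = 1.0          # Earth's distance from the Sun, AU (assumed circular)

# With the orbits idealized as coplanar circles, greatest elongation occurs when the
# Earth-Mercury line is tangent to Mercury's orbit: sin(elongation) = r_mercury / r_earth.
for label, r in (("perihelion", r_perihelion), ("aphelion", r_aphelion)):
    elongation = math.degrees(math.asin(r / r_earth))
    print(f"greatest elongation near {label}: {elongation:.1f} deg")
# ~17.9 deg and ~27.8 deg, matching the range quoted above
```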
The first of two spacecraft to visit the planet was Mariner 10, which mapped about 45% of its surface from 1974 to 1975. The second is the MESSENGER spacecraft, which after three Mercury flybys between 2008 and 2009, attained orbit around Mercury on March 17, 2011, to study and map the rest of the planet. Because the shift of 0.15 revolutions in a year makes up a seven-year cycle (0.15 × 7 ≈ 1.0), in the seventh year Mercury follows almost exactly (earlier by 7 days) the sequence of phenomena it showed seven years before. The earliest known recorded observations of Mercury are from the Mul.Apin tablets. These observations were most likely made by an Assyrian astronomer around the 14th century BC. The cuneiform name used to designate Mercury on the Mul.Apin tablets is transcribed as Udu.Idim.Gu\u4.Ud ("the jumping planet").[e] Babylonian records of Mercury date back to the 1st millennium BC. The Babylonians called the planet Nabu after the messenger to the gods in their mythology. The ancients knew Mercury by different names depending on whether it was an evening star or a morning star. By about 350 BC, the ancient Greeks had realized the two stars were one. They knew the planet as Στίλβων Stilbōn, meaning "twinkling", and Ἑρμής Hermēs, for its fleeting motion, a name that is retained in modern Greek (Ερμής Ermis). The Romans named the planet after the swift-footed Roman messenger god, Mercury (Latin Mercurius), which they equated with the Greek Hermes, because it moves across the sky faster than any other planet. The astronomical symbol for Mercury is a stylized version of Hermes' caduceus. The Greco-Egyptian astronomer Ptolemy wrote about the possibility of planetary transits across the face of the Sun in his work Planetary Hypotheses. He suggested that no transits had been observed either because planets such as Mercury were too small to see, or because the transits were too infrequent. In ancient China, Mercury was known as "the Hour Star" (Chen-xing 辰星). It was associated with the direction north and the phase of water in the Five Phases system of metaphysics. Modern Chinese, Korean, Japanese and Vietnamese cultures refer to the planet literally as the "water star" (水星), based on the Five elements. Hindu mythology used the name Budha for Mercury, and this god was thought to preside over Wednesday. The god Odin (or Woden) of Germanic paganism was associated with the planet Mercury and Wednesday. The Maya may have represented Mercury as an owl (or possibly four owls; two for the morning aspect and two for the evening) that served as a messenger to the underworld. In medieval Islamic astronomy, the Andalusian astronomer Abū Ishāq Ibrāhīm al-Zarqālī in the 11th century described the deferent of Mercury's geocentric orbit as being oval, like an egg or a pignon, although this insight did not influence his astronomical theory or his astronomical calculations. In the 12th century, Ibn Bajjah observed "two planets as black spots on the face of the Sun", which was later suggested as the transit of Mercury and/or Venus by the Maragha astronomer Qotb al-Din Shirazi in the 13th century. (Note that most such medieval reports of transits were later taken as observations of sunspots.) In India, the Kerala school astronomer Nilakantha Somayaji in the 15th century developed a partially heliocentric planetary model in which Mercury orbits the Sun, which in turn orbits Earth, similar to the Tychonic system later proposed by Tycho Brahe in the late 16th century. 
Ground-based telescopic research The first telescopic observations of Mercury were made by Galileo in the early 17th century. Although he observed phases when he looked at Venus, his telescope was not powerful enough to see the phases of Mercury. In 1631, Pierre Gassendi made the first telescopic observations of the transit of a planet across the Sun when he saw a transit of Mercury predicted by Johannes Kepler. In 1639, Giovanni Zupi used a telescope to discover that the planet had orbital phases similar to Venus and the Moon. The observation demonstrated conclusively that Mercury orbited around the Sun. A rare event in astronomy is the passage of one planet in front of another (occultation), as seen from Earth. Mercury and Venus occult each other every few centuries, and the event of May 28, 1737 is the only one historically observed, having been seen by John Bevis at the Royal Greenwich Observatory. The next occultation of Mercury by Venus will be on December 3, 2133. The difficulties inherent in observing Mercury mean that it has been far less studied than the other planets. In 1800, Johann Schröter made observations of surface features, claiming to have observed 20-kilometre-high (12 mi) mountains. Friedrich Bessel used Schröter's drawings to erroneously estimate the rotation period as 24 hours and an axial tilt of 70°. In the 1880s, Giovanni Schiaparelli mapped the planet more accurately, and suggested that Mercury's rotational period was 88 days, the same as its orbital period due to tidal locking. This phenomenon is known as synchronous rotation. The effort to map the surface of Mercury was continued by Eugenios Antoniadi, who published a book in 1934 that included both maps and his own observations. Many of the planet's surface features, particularly the albedo features, take their names from Antoniadi's map. In June 1962, Soviet scientists at the Institute of Radio-engineering and Electronics of the USSR Academy of Sciences, led by Vladimir Kotelnikov, became the first to bounce a radar signal off Mercury and receive it, starting radar observations of the planet. Three years later, radar observations by Americans Gordon H. Pettengill and Rolf B. Dyce, using the 300-meter Arecibo Observatory radio telescope in Puerto Rico, showed conclusively that the planet's rotational period was about 59 days. The theory that Mercury's rotation was synchronous had become widely held, and it was a surprise to astronomers when these radio observations were announced. If Mercury were tidally locked, its dark face would be extremely cold, but measurements of radio emission revealed that it was much hotter than expected. Astronomers were reluctant to drop the synchronous rotation theory and proposed alternative mechanisms such as powerful heat-distributing winds to explain the observations. Italian astronomer Giuseppe Colombo noted that the rotation value was about two-thirds of Mercury's orbital period, and proposed that the planet's orbital and rotational periods were locked into a 3:2 rather than a 1:1 resonance. Data from Mariner 10 subsequently confirmed this view. This means that Schiaparelli's and Antoniadi's maps were not "wrong". Instead, the astronomers saw the same features during every second orbit and recorded them, but disregarded those seen in the meantime, when Mercury's other face was toward the Sun, because the orbital geometry meant that these observations were made under poor viewing conditions. 
Ground-based optical observations did not shed much further light on Mercury, but radio astronomers using interferometry at microwave wavelengths, a technique that enables removal of the solar radiation, were able to discern physical and chemical characteristics of the subsurface layers to a depth of several meters. Not until the first space probe flew past Mercury did many of its most fundamental morphological properties become known. Moreover, recent technological advances have led to improved ground-based observations. In 2000, high-resolution lucky imaging observations were conducted with the Mount Wilson Observatory 1.5-meter Hale telescope. They provided the first views that resolved surface features on the parts of Mercury that were not imaged in the Mariner 10 mission. Most of the planet has been mapped by the Arecibo radar telescope, with 5 km (3.1 mi) resolution, including polar deposits in shadowed craters of what may be water ice. Research with space probes: Reaching Mercury from Earth poses significant technical challenges, because it orbits so much closer to the Sun than Earth does. A Mercury-bound spacecraft launched from Earth must travel over 91 million kilometres (57 million miles) into the Sun's gravitational potential well. Mercury has an orbital speed of 48 km/s (30 mi/s), whereas Earth's orbital speed is 30 km/s (19 mi/s). Therefore, the spacecraft must make a large change in velocity (delta-v) to enter a Hohmann transfer orbit that passes near Mercury, compared with the delta-v required for other planetary missions. The potential energy liberated by moving down the Sun's potential well becomes kinetic energy, so another large delta-v change is needed to do anything other than rapidly pass by Mercury. To land safely or enter a stable orbit, the spacecraft would rely entirely on rocket motors. Aerobraking is ruled out because Mercury has a negligible atmosphere. A trip to Mercury requires more rocket fuel than that required to escape the Solar System completely. As a result, only two space probes have visited it so far. A proposed alternative approach would use a solar sail to attain a Mercury-synchronous orbit around the Sun. The first spacecraft to visit Mercury was NASA's Mariner 10 (1974–1975). The spacecraft used the gravity of Venus to adjust its orbital velocity so that it could approach Mercury, making it both the first spacecraft to use this gravitational "slingshot" effect and the first NASA mission to visit multiple planets. Mariner 10 provided the first close-up images of Mercury's surface, which immediately showed its heavily cratered nature and revealed many other types of geological features, such as the giant scarps that were later ascribed to the effect of the planet shrinking slightly as its iron core cools. Unfortunately, the same face of the planet was lit at each of Mariner 10's close approaches. This made close observation of both sides of the planet impossible, and resulted in the mapping of less than 45% of the planet's surface. The spacecraft made three close approaches to Mercury, the closest of which took it to within 327 km (203 mi) of the surface. At the first close approach, instruments detected a magnetic field, to the great surprise of planetary geologists, because Mercury's rotation was expected to be much too slow to generate a significant dynamo effect. The second close approach was primarily used for imaging, but at the third approach, extensive magnetic data were obtained.
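To make the launch-energy comparison above concrete, the following sketch (an illustration added here, not part of the original article) estimates the two velocity changes of an idealized Hohmann transfer from Earth's orbit to Mercury's orbit. It assumes circular, coplanar orbits and ignores the planets' own gravity; the solar gravitational parameter and mean orbital radii are standard values, and the function names are invented for this example.

```python
import math

# Minimal sketch: idealized Hohmann transfer from Earth's orbit to Mercury's,
# assuming circular, coplanar orbits and ignoring planetary gravity wells.
MU_SUN = 1.32712440018e20      # Sun's gravitational parameter, m^3/s^2
R_EARTH = 1.496e11             # Earth's mean orbital radius, m
R_MERCURY = 5.79e10            # Mercury's mean orbital radius, m

def circular_speed(r):
    """Speed of a circular heliocentric orbit of radius r."""
    return math.sqrt(MU_SUN / r)

def hohmann_delta_v(r1, r2):
    """Return (dv_departure, dv_arrival) for a Hohmann transfer from r1 to r2."""
    a_transfer = (r1 + r2) / 2                       # semi-major axis of the transfer ellipse
    v1 = circular_speed(r1)
    v2 = circular_speed(r2)
    # vis-viva equation: v^2 = mu * (2/r - 1/a)
    v_transfer_1 = math.sqrt(MU_SUN * (2 / r1 - 1 / a_transfer))
    v_transfer_2 = math.sqrt(MU_SUN * (2 / r2 - 1 / a_transfer))
    return abs(v_transfer_1 - v1), abs(v_transfer_2 - v2)

dv_out, dv_in = hohmann_delta_v(R_EARTH, R_MERCURY)
print(f"Earth orbital speed   : {circular_speed(R_EARTH) / 1e3:5.1f} km/s")
print(f"Mercury orbital speed : {circular_speed(R_MERCURY) / 1e3:5.1f} km/s")
print(f"Delta-v at departure  : {dv_out / 1e3:5.1f} km/s")
print(f"Delta-v at arrival    : {dv_in / 1e3:5.1f} km/s")
```

With these assumed values the two burns come to roughly 7.5 km/s and 9.6 km/s, about 17 km/s in total, which is more than the roughly 12 km/s needed to escape the Sun outright from Earth's orbit. That is consistent with the comparison above and with why Mariner 10, MESSENGER and BepiColombo all relied on planetary gravity assists rather than direct transfers.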
The data revealed that the planet's magnetic field is much like Earth's, which deflects the solar wind around the planet. For many years after the Mariner 10 encounters, the origin of Mercury's magnetic field remained the subject of several competing theories. On March 24, 1975, just eight days after its final close approach, Mariner 10 ran out of fuel. Because its orbit could no longer be accurately controlled, mission controllers instructed the probe to shut down. Mariner 10 is thought to be still orbiting the Sun, passing close to Mercury every few months. A second NASA mission to Mercury, named MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging), was launched on August 3, 2004. It made a fly-by of Earth in August 2005, and of Venus in October 2006 and June 2007, to place it onto the correct trajectory to reach an orbit around Mercury. A first fly-by of Mercury occurred on January 14, 2008, a second on October 6, 2008, and a third on September 29, 2009. Most of the hemisphere not imaged by Mariner 10 was mapped during these fly-bys. The probe successfully entered an elliptical orbit around the planet on March 18, 2011. The first orbital image of Mercury was obtained on March 29, 2011. The probe finished a one-year mapping mission, and then entered a one-year extended mission into 2013. In addition to continued observations and mapping of Mercury, MESSENGER observed the 2012 solar maximum. The mission was designed to clear up six key issues: Mercury's high density, its geological history, the nature of its magnetic field, the structure of its core, whether it has ice at its poles, and where its tenuous atmosphere comes from. To this end, the probe carried imaging devices that gathered much-higher-resolution images of much more of Mercury than Mariner 10 had, assorted spectrometers to determine abundances of elements in the crust, and magnetometers and devices to measure velocities of charged particles. Measurements of changes in the probe's orbital velocity were expected to be used to infer details of the planet's interior structure. MESSENGER's final maneuver was on April 24, 2015, and it crashed into Mercury's surface on April 30, 2015, at about 3:26 PM EDT, leaving a crater estimated to be 16 m (52 ft) in diameter. The European Space Agency and the Japan Aerospace Exploration Agency developed and launched a joint mission called BepiColombo, which will orbit Mercury with two probes: one to map the planet and the other to study its magnetosphere. Launched on October 20, 2018, BepiColombo is expected to reach Mercury in 2025. It will release a magnetometer probe into an elliptical orbit, then chemical rockets will fire to deposit the mapper probe into a circular orbit. Both probes will operate for one terrestrial year. The mapper probe carries an array of spectrometers similar to those on MESSENGER, and will study the planet at many different wavelengths including infrared, ultraviolet, X-ray and gamma ray.
- [e] Some sources precede the cuneiform transcription with "MUL". "MUL" is a cuneiform sign that was used in the Sumerian language to designate a star or planet, but it is not considered part of the actual name. The "4" is a reference number in the Sumero-Akkadian transliteration system to designate which of several syllables a certain cuneiform sign is most likely designating.
What are gas laws? To study gas behaviour, we first consider three physical properties of a gas: pressure, volume and temperature. It is on these three properties that the gas laws are based. Note that among the three states of matter, gas is the most mobile because of the high kinetic energy of its molecules. The three gas laws are 1. Boyle's law, 2. Charles' law, and 3. the pressure law (Gay-Lussac's law).
Boyle's law
Boyle's law states that the volume of a fixed mass of gas varies inversely as its pressure, provided the temperature remains constant. Mathematically, V ∝ 1/P, or PV = K, so that P1V1 = P2V2, where K = constant, P = pressure and V = volume.
Applications of Boyle's law: 1. It is used to determine the volume of a given mass of gas at constant temperature when the pressure is known. 2. It can also be used to find the pressure of a given mass of gas at constant temperature when the volume is known.
Charles' law
Charles' law states that the volume of a fixed mass of gas is directly proportional to its absolute temperature (T), provided the pressure remains constant. Mathematically, V ∝ T, or V/T = K, so that V1/T1 = V2/T2, where V1 and V2 are the initial and final volumes, T1 and T2 are the initial and final absolute temperatures, and K = constant.
Applications of Charles' law: 1. With Charles' law, the absolute zero of temperature (the temperature at which the volume of a gas is theoretically zero) can be determined by plotting the volume of the gas against its temperature. 2. The cubic expansivity of a gas also follows from Charles' law; it is given as γ = 1/273 per °C, referred to the volume at 0 °C. Hence Charles' law can be restated as: the volume of a fixed mass of gas at constant pressure increases by 1/273 of its volume at 0 °C for every degree (Celsius or kelvin) rise in temperature.
Pressure law (Gay-Lussac's law)
This law deals with the change in pressure and absolute temperature of a gas when the volume is kept constant. It states that the pressure of a fixed mass of gas is directly proportional to its absolute temperature, provided the volume is kept constant. Mathematically, P = KT, i.e. P/T = K, where K = constant, P = pressure and T = absolute temperature.
Absolute temperature and pressure in the gas laws
If P0 = the gas pressure at 0 °C and Pθ = the gas pressure at θ °C, the pressure law can be written as Pθ = P0(1 + γθ), where γ = 1/273 K⁻¹ is the pressure coefficient at constant volume.
General gas equation
The general gas equation is derived by combining the three laws. From Boyle's law, PV = k at constant temperature; from Charles' law, V/T = k at constant pressure; from the pressure law, P/T = k at constant volume. Combining these gives the general gas equation PV/T = K, i.e. P1V1/T1 = P2V2/T2.
Standard temperature and pressure
The temperature and pressure of a volume of gas can be converted to standard temperature and pressure (s.t.p.). This conversion makes comparison between gases possible. The standard temperature is taken as 0 °C (273 K) and the standard pressure as 760 mm of mercury.
Van der Waals force and the equation for a real gas
What is the van der Waals force? It is the force of attraction between gas molecules. It is often called the weak van der Waals force because it is indeed weak, but it is not negligible.
The van der Waals equation for one mole of a real gas is (P + a/V²)(V − b) = RT, where a is a constant that corrects the pressure for the attraction between molecules, b is a constant representing the volume occupied by the molecules themselves (so V − b is the free space inside the container), and R is the gas constant.
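As an illustration (not part of the original lesson), the short sketch below compares the pressure given by the ideal-gas relation PV = RT with the van der Waals correction for one mole of gas. The constants a and b are assumed, illustrative values of roughly the size tabulated for carbon dioxide, and the function names are invented for this example.

```python
# Minimal sketch: pressure of one mole of gas from the ideal gas equation
# PV = RT versus the van der Waals equation (P + a/V^2)(V - b) = RT.
# The a and b values are illustrative (roughly those of CO2); treat them
# as assumptions rather than authoritative data.

R = 8.314          # J/(mol*K), gas constant
a = 0.364          # Pa*m^6/mol^2, attraction correction (assumed value)
b = 4.27e-5        # m^3/mol, volume excluded by the molecules (assumed value)

def ideal_pressure(V, T):
    """P = RT / V for one mole."""
    return R * T / V

def van_der_waals_pressure(V, T):
    """P = RT/(V - b) - a/V^2 for one mole."""
    return R * T / (V - b) - a / V**2

T = 273.15                              # standard temperature, K
for V in (22.4e-3, 1.0e-3, 2.0e-4):     # molar volumes in m^3
    p_ideal = ideal_pressure(V, T)
    p_vdw = van_der_waals_pressure(V, T)
    print(f"V = {V:8.2e} m^3: ideal {p_ideal / 1e5:8.2f} bar, "
          f"van der Waals {p_vdw / 1e5:8.2f} bar")
```

At ordinary molar volumes the two results agree closely, while at small molar volumes (high pressure) the attraction term pulls the real-gas pressure well below the ideal value, which is exactly the deviation the van der Waals correction is meant to capture.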
Kinetic molecular theory and the gas laws
The kinetic theory states that: (I) all matter is made of atoms and molecules; (II) the molecules are in a constant state of motion; (III) the molecules possess kinetic energy because of their continuous motion; (IV) the molecules exert attractive forces on one another; and (V) the nearer the molecules are to one another, the greater the attractive force.
Using the kinetic theory to explain the gas laws
1. Pressure exerted by a gas: the kinetic theory considers a gas as made up of a large number of molecules which behave like elastic spheres. The molecules move about in their container with random velocities, colliding with one another and with the walls of the container. As gas molecules hit the walls of the containing vessel and rebound, they experience a change in velocity and therefore a change in momentum. The walls of the container experience forces due to this change in momentum of the gas molecules, and the gas thereby exerts a pressure on the walls of the vessel.
2. Boyle's law: if the volume of the vessel containing the gas is reduced at constant temperature, the gas molecules have less space to occupy, so they take less time to reach the walls. More bombardments are made on the walls per unit time, and the pressure exerted by the gas therefore increases. On the other hand, when the volume of the gas is increased, the molecules take more time to reach the walls, and this leads to a decrease in the pressure of the gas. The pressure of a gas is therefore inversely proportional to its volume at constant temperature (Boyle's law).
3. Charles' law: if the temperature of the gas is increased at constant pressure, the molecules gain kinetic energy, which increases their velocities. With the increase in velocities the molecules hit the walls more frequently, which would increase the pressure; to keep the pressure constant, the volume must increase. Therefore an increase in temperature at constant pressure leads to an increase in volume (Charles' law).
4. Pressure law (Gay-Lussac's law): if the temperature of a given mass of gas is increased at constant volume, the average kinetic energy of the molecules increases. This results in an increase in the speed of the molecules, which makes them hit the walls of the containing vessel more frequently. Consequently, the pressure of the given mass of gas increases with increasing temperature at constant volume (Gay-Lussac's law).
Questions and answers on gas laws
- Calculate the fractional change in volume of a fixed mass of gas whose pressure is tripled at constant temperature. Answer: Using Boyle's law, P1V1 = P2V2. But P2 = 3P1, therefore V2 = P1V1/P2 = V1/3. So the final volume becomes one-third of the initial volume.
- The pressure of a fixed mass of gas is 850 cmHg when its volume is 20 cm3. Calculate the volume of the gas when its pressure is 600 cmHg. Answer: Using Boyle's law, P1V1 = P2V2, with P1 = 850 cmHg, P2 = 600 cmHg and V1 = 20 cm3: 850 × 20 = 600 × V2, so V2 = 17000/600 ≈ 28.3 cm3.
- A fixed mass of gas with volume 600 cm3 at 0 °C is heated at constant pressure. Calculate the volume of the gas at 130 °C. Answer: Using Charles' law, V1/T1 = V2/T2, with T1 = 0 + 273 = 273 K, T2 = 130 + 273 = 403 K and V1 = 600 cm3: V2 = V1T2/T1 = 600 × 403/273 ≈ 886 cm3.
- The pressure of a gas at constant volume is 100 cmHg at 27 °C. Calculate the pressure at 87 °C. Answer: Using the pressure law (Gay-Lussac's law), P1/T1 = P2/T2, with T1 = 27 + 273 = 300 K and T2 = 87 + 273 = 360 K: P2 = P1T2/T1 = 100 × 360/300 = 120 cmHg.
- Some quantity of dry air has a volume of 500 cm3 at 40 °C and under a pressure of 76 cmHg. At a temperature of 120 °C its volume becomes 600 cm3; calculate the pressure of the gas at this temperature. Answer: Using the general gas equation, P1V1/T1 = P2V2/T2, with P1 = 76 cmHg, V1 = 500 cm3, V2 = 600 cm3, T1 = 40 + 273 = 313 K and T2 = 120 + 273 = 393 K: P2 = P1V1T2/(V2T1) = 76 × 500 × 393/(600 × 313) ≈ 79.5 cmHg.
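The worked answers above can be reproduced with a few lines of code. The sketch below (not part of the original lesson) applies the combined gas equation P1V1/T1 = P2V2/T2; the helper function names are invented for this example.

```python
# Small check of the worked answers above, using the combined gas equation
# P1*V1/T1 = P2*V2/T2.  Function and variable names are invented here.

def solve_p2(p1, v1, t1, v2, t2):
    """Final pressure from the combined gas equation."""
    return p1 * v1 * t2 / (v2 * t1)

def solve_v2(p1, v1, t1, p2, t2):
    """Final volume from the combined gas equation."""
    return p1 * v1 * t2 / (p2 * t1)

# Boyle's law problem: 850 cmHg, 20 cm3 -> 600 cmHg (temperature unchanged)
print(round(solve_v2(850, 20, 300, 600, 300), 1), "cm3")     # ~28.3

# Charles' law problem: 600 cm3 heated from 0 C to 130 C at constant pressure
print(round(solve_v2(76, 600, 273, 76, 403), 1), "cm3")      # ~885.7

# Pressure law problem: 100 cmHg heated from 27 C to 87 C at constant volume
print(round(solve_p2(100, 1, 300, 1, 360), 1), "cmHg")       # 120.0

# General equation problem: 76 cmHg, 500 cm3 at 40 C -> 600 cm3 at 120 C
print(round(solve_p2(76, 500, 313, 600, 393), 1), "cmHg")    # ~79.5
```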
In analytical geometry (usually taught in high school), two lines are drawn on a paper that are perpendicular to each other. The vertical line represents the "y-axis," and the horizontal line represents the "x-axis." Using these two axes, every point on the paper can be given a value that defines where the point is. If the place where the two lines cross is the zero point or origin, its coordinates (x, y) are simply (0, 0). Along the horizontal x-axis, starting to the right of the (0, 0) point, write little numbers like a ruler has: 1, 2, 3, and so forth. To the left of that point, write -1, -2, -3, and so on. For the y-axis, the 1, 2, 3, and such go upward, whereas the -1, -2, -3 and the rest go downward.
Coordinate Axes: Equation of a Line – Examples
OK. We've prepared our coordinate system. Using it as a kind of mapping aid, we will draw the simplest of equations, that of a straight line. What does the general equation for a line look like? Before we discuss that, we'll first consider three examples. First, the line x = 1. This equation means that no matter what value y has, x has the value one. Let's draw some points to demonstrate what we mean. We'll pick y = 0, y = 2, y = 5, y = -3. Then we get the points (1, 0), (1, 2), (1, 5) and (1, -3). Locating these on our coordinate axes, we see that all we are doing is drawing a line parallel to the y-axis, but to the right one notch! Correspondingly, if we next consider y = 1, we end up with a line one notch above the x-axis! For our final example of a line, let's choose y = 2x. Some example points are (-2, -4), (-1, -2), (0, 0), (1, 2) and (2, 4). This line goes from the bottom left-hand side of the coordinate axes up to the right-hand side. The line is just a bit more vertical than horizontal. This is because we included the number 2 in the equation. If we had written y = 1/2 x instead, the line would have been a little more horizontal than vertical. It is for good reason that the number in front of the x is called the "slope." As in skiing, the number determines the slope of the line.
General Equation for a Line
OK. We're ready to consider the general equation for a line. It is y = mx + b. We've already seen that m is a number representing the slope of the line. What, then, is b? It determines where the line crosses the y-axis. To demonstrate that, choose x = 0. Then if b = 2.7, for instance, y = 2.7. The line crosses the y-axis at (0, 2.7). This is the reason why b is called the "intercept." It represents the point where the line intercepts the y-axis.
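A small script can tabulate points on a line for any slope m and intercept b, echoing the hand-drawn examples above. The sketch below is an added illustration; the function name is invented here.

```python
# Minimal sketch: tabulate points on the line y = m*x + b for the examples
# discussed above.  Function and variable names are invented for this sketch.

def line_points(m, b, xs):
    """Return (x, y) pairs on the line y = m*x + b for the given x values."""
    return [(x, m * x + b) for x in xs]

xs = [-3, -1, 0, 1, 2, 5]

# y = 2x: slope 2, intercept 0, a bit more vertical than horizontal
print("y = 2x      :", line_points(2, 0, xs))

# y = (1/2)x: slope 1/2, a little more horizontal than vertical
print("y = 0.5x    :", line_points(0.5, 0, xs))

# y = 1: slope 0, a horizontal line one notch above the x-axis
print("y = 1       :", line_points(0, 1, xs))

# Intercept example: b = 2.7 means the line crosses the y-axis at (0, 2.7)
print("y = 2x + 2.7:", line_points(2, 2.7, [0]))
```

Note that the vertical line x = 1 cannot be written in the form y = mx + b, which is why it was treated separately above.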
Carpentry Teacher Resources: Find carpentry educational ideas and activities. Students role-play to show how bank loans made to people can have an impact on others in the community. In small groups, they analyze hypothetical loans, using flow charts or other diagrams to describe the probable impact of each. Learners use a pie (circle) to study fractions. They create their own fraction problems and demonstrate to their classmates how to solve the problems. They discuss ways they use fractions in their daily lives. Students examine how national events affect them in New York. They examine case studies about individuals who represent different groups in society. Learners examine laws that have affected women in history: the 1780s, following the United States' independence from England, and the 1880s, the time of westward expansion, the silver/gold era, and the coming of the Industrial Revolution. Students recall unique characteristics of Nepalese painting, sculpture, and music, create an example from one area of Nepalese arts, and share this with the class. Students demonstrate, in writing, the proper technique to measure, square, and cut a piece of stock to length using a circular saw. Students plan a meal, including shopping for it, preparing it, and serving it. They become aware of different foods from different ethnic backgrounds and describe the displays in the store and why the items are displayed as they are. Students compare and contrast the various types of architecture in the Southern colonies. Using slides, they discuss how the homes were made and the materials used. In groups, they identify how the types of homes reflected the lifestyles of the colonists living in the Southern colonies. Students discuss the impact of not keeping the environment in balance for future generations. As a class, they are introduced to the concept of "Balance of Nature" and what it means. In groups, they research the role of trees and how to start a recycling program in their community. To end the lesson, they make their own compost pile and use the material to grow their own plants. Fourth graders are introduced to the different cultural groups that have settled in Ohio. In groups, they research and describe the products and cultural practices of each group. Using the information, they create an accordion book for a quick reference guide to each group. Pupils watch a demonstration on how to perform a proper internet search. Individually, they visit the Occupational Outlook Handbook and identify its purpose. They identify what professionals in a specific career do on the job and the requirements to hold the position. Students are introduced to the elements of interior design. Using these elements, they redesign an interior space and use proper colors. They summarize the elements of design and create color schemes for the interior and exterior of a house. Students use the graph and equation of a perimeter function to find characteristics that minimize the perimeter of a rectangle. They comprehend that in business situations, such as packaging and city planning, minimizing perimeter is an effective way to reduce production expenses. Students go to the Minimize Perimeter activity and explore the minimum perimeter of a rectangle with a fixed area. Students research the plight of homeless individuals and families, locally, within their country, or elsewhere. They problem-solve ideas to help alleviate homelessness.
They then dramatically illustrate their solutions to the problem of homelessness. In this irregular verbs worksheet, pupils write the correct form of irregular verbs. Students identify the principal parts of given verbs. Pupils answer forty fill-in-the-blank questions. In this finding reasonable answers practice worksheet, students sharpen their problem solving skills as they solve 6 story problems. Middle schoolers explore organizations founded for the common good. In this character education lesson, students read about organizations that developed for the common good during the Civil War and Reconstruction. In small groups, middle schoolers present this information to the class. Students evaluate the lengths of rooftop rafters. In this measurement lesson, students calculate the lengths, slope, and width of different rooftop rafters. Students collaboratively investigate the data and prepare a presentation to discuss their findings. Fourth graders explore the culture and tradition of the settlers of Ohio. In this Ohio history instructional activity, 4th graders research Native Americans, European immigrants, Amish/Appalachian cultural groups, African-Americans, and other immigrant groups who settled Ohio. Students create accordion books based on their findings. Third graders compare how the lives of African American slave children differed from children's lives today. In this analysis of slavery lesson, 3rd graders evaluate and discuss the conditions of slavery in collaborative groups. Using information viewed and discussed over the prior two days, students create a poster project comparing their lives today to those of enslaved children.
In geometry, a locus (plural: loci) (Latin for "place" or "location") is a set of all points (commonly, a line, a line segment, a curve or a surface) whose location satisfies or is determined by one or more specified conditions. In other words, the set of the points that satisfy some property is often called the locus of a point satisfying this property. The use of the singular in this formulation is a witness that, until the end of the 19th century, mathematicians did not consider infinite sets. Instead of viewing lines and curves as sets of points, they viewed them as places where a point may be located or may move.

Until the beginning of the 20th century, a geometrical shape (for example a curve) was not considered as an infinite set of points; rather, it was considered as an entity on which a point may be located or on which it moves. Thus a circle in the Euclidean plane was defined as the locus of a point that is at a given distance from a fixed point, the center of the circle. In modern mathematics, similar concepts are more frequently reformulated by describing shapes as sets; for instance, one says that the circle is the set of points that are at a given distance from the center. In contrast to the set-theoretic view, the old formulation avoids considering infinite collections, as avoiding the actual infinite was an important philosophical position of earlier mathematicians. Once set theory became the universal basis over which the whole of mathematics is built, the term locus became rather old-fashioned. Nevertheless, the word is still widely used when a concise formulation is wanted. More recently, techniques such as the theory of schemes, and the use of category theory instead of set theory to give a foundation to mathematics, have returned to notions more like the original definition of a locus as an object in itself rather than as a set of points.

Other examples of loci appear in various areas of mathematics. For example, in complex dynamics, the Mandelbrot set is a subset of the complex plane that may be characterized as the connectedness locus of a family of polynomial maps.

To prove that a geometric shape is the correct locus for a given set of conditions, one generally divides the proof into two stages: first, a proof that every point satisfying the conditions lies on the shape, and second, a proof that every point on the shape satisfies the conditions.

A first example: find the locus of a point P that has a given ratio of distances k = d1/d2 to two given points. In this example k = 3, and A(−1, 0) and B(0, 2) are chosen as the fixed points; the resulting locus is a circle (a circle of Apollonius).

A second example: given a segment [AB] of fixed length c, find the locus of the third vertex C of a triangle ABC such that the medians from A and from C are orthogonal. Choose an orthonormal coordinate system such that A(−c/2, 0), B(c/2, 0). C(x, y) is the variable third vertex. The midpoint of [BC] is M((2x + c)/4, y/2). The median from C has slope y/x, and the median AM has slope 2y/(2x + 3c). Setting the product of the two slopes equal to −1 (the orthogonality condition) shows that the locus of the vertex C is a circle with center (−3c/4, 0) and radius 3c/4.

A locus can also be defined by two associated curves depending on one common parameter. If the parameter varies, the intersection points of the associated curves describe the locus.

A locus of points need not be one-dimensional (as a circle, line, etc.). For example, the locus of the inequality 2x + 3y – 6 < 0 is the portion of the plane that is below the line of equation 2x + 3y – 6 = 0.
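A quick numerical check of the first example can make the idea concrete. The short Python sketch below is an added illustration, not part of the original article: it assumes the ratio is taken as distance to A over distance to B, derives the Apollonius circle algebraically for k = 3, A(−1, 0), B(0, 2), and verifies that sampled points on that circle satisfy the condition. The helper name on_locus is mine.

import math

# Fixed points and ratio from the example in the text.
A, B, k = (-1.0, 0.0), (0.0, 2.0), 3.0

def on_locus(x, y, tol=1e-9):
    """True if (x, y) satisfies dist(P, A) = k * dist(P, B)."""
    dA = math.hypot(x - A[0], y - A[1])
    dB = math.hypot(x - B[0], y - B[1])
    return abs(dA - k * dB) < tol

# Expanding (x+1)^2 + y^2 = 9*(x^2 + (y-2)^2) and completing the square
# gives the circle (x - 1/8)^2 + (y - 9/4)^2 = 45/64.
cx, cy, r = 1 / 8, 9 / 4, math.sqrt(45) / 8

# Sample points on that circle and confirm each satisfies the ratio condition.
for i in range(12):
    t = 2 * math.pi * i / 12
    x, y = cx + r * math.cos(t), cy + r * math.sin(t)
    assert on_locus(x, y), (x, y)

print("all sampled points satisfy dist(P, A) = 3 * dist(P, B)")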
Attitudes are associated beliefs and behaviors towards some object. They are not stable, and, because of the communication and behavior of other people, they are subject to change through social influence, as well as through the individual's motivation to maintain cognitive consistency when cognitive dissonance occurs, that is, when two attitudes, or an attitude and a behavior, conflict. Attitudes and attitude objects are functions of affective and cognitive components. It has been suggested that the inter-structural composition of an associative network can be altered by the activation of a single node. Thus, by activating an affective or emotional node, attitude change may be possible, though affective and cognitive components tend to be intertwined.

Compliance refers to a change in behavior based on consequences, such as an individual's hopes to gain rewards or avoid punishment from another group or person. The individual does not necessarily experience changes in beliefs or evaluations of an attitude object, but rather is influenced by the social outcomes of adopting a change in behavior. The individual is also often aware that he or she is being urged to respond in a certain way. Compliance was demonstrated through a series of laboratory experiments known as the Asch experiments. Experiments led by Solomon Asch of Swarthmore College asked groups of students to participate in a "vision test". In reality, all but one of the participants were confederates of the experimenter, and the study was really about how the remaining student would react to the confederates' behavior. Participants were asked to pick, out of three line options, the line that was the same length as a sample, and were asked to give the answer out loud. Unbeknown to the participants, Asch had placed a number of confederates to deliberately give the wrong answer before the participant. The results showed that 75% of participants conformed to the majority at least once, giving the same answers the confederates picked. Variations in the experiments showed that compliance rates increased as the number of confederates increased, and a plateau was reached with around 15 confederates. The likelihood of compliance dropped with minority opposition, even if only one confederate gave the correct answer. The basis for compliance is founded on the fundamental idea that people want to be accurate and right.

Identification explains one's change of beliefs and affect in order to be similar to someone one admires or likes. In this case, the individual adopts the new attitude not due to the specific content of the attitude object, but because it is associated with the desired relationship. Often, children's attitudes on race, or their political party affiliations, are adopted from their parents' attitudes and beliefs.

Internalization refers to the change in beliefs and affect when one finds the content of the attitude to be intrinsically rewarding, and thus leads to actual change in beliefs or evaluations of an attitude object. The new attitude or behavior is consistent with the individual's value system, and tends to be merged with the individual's existing values and beliefs. Therefore, behaviors adopted through internalization are due to the content of the attitude object. The expectancy-value theory is based on internalization of attitude change. This model states that the behavior towards some object is a function of an individual's intent, which is a function of one's overall attitude towards the action. 
Emotion plays a major role in persuasion, social influence, and attitude change. Much of attitude research has emphasised the importance of affective or emotion components. Emotion works hand-in-hand with the cognitive process, or the way we think, about an issue or situation. Emotional appeals are commonly found in advertising, health campaigns and political messages. Recent examples include no-smoking health campaigns (see tobacco advertising) and political campaigns emphasizing the fear of terrorism. There is considerable empirical support for the idea that emotions in the form of fear arousal, empathy, or a positive mood can enhance attitude change under certain conditions. Important factors that influence the impact of emotional appeals include self-efficacy, attitude accessibility, issue involvement, and message/source features. Attitudes that are central to one's being are highly resistant to change, while others that are less fixed may change with new experiences or information. A new attitude (e.g. toward time-keeping, absenteeism or quality) may challenge existing beliefs or norms, creating a feeling of psychological discomfort known as cognitive dissonance. It is difficult to measure attitude change, since attitudes may only be inferred and there might be significant divergence between those publicly declared and privately held.

Self-efficacy is a perception of one's own human agency; in other words, it is the perception of our own ability to deal with a situation. It is an important variable in emotional appeal messages because it dictates a person's ability to deal with both the emotion and the situation. For example, if a person is not self-efficacious about their ability to impact the global environment, they are not likely to change their attitude or behaviour about global warming.

Affective forecasting, otherwise known as intuition or the prediction of emotion, also impacts attitude change. Research suggests that predicting emotions is an important component of decision making, in addition to the cognitive processes. How we feel about an outcome may override purely cognitive rationales.

In terms of research methodology, the challenge for researchers is measuring emotion and its subsequent impact on attitude. Since we cannot see into the brain, various models and measurement tools have been constructed to obtain emotion and attitude information. Measures may include the use of physiological cues like facial expressions, vocal changes, and other body rate measures. For instance, fear is associated with raised eyebrows, increased heart rate and increased body tension. Other methods include concept or network mapping, and using primes or word cues.

Dual models: depth of processing

Many dual process models are used to explain the affective (emotional) and cognitive processing and interpretations of messages, as well as the different depths of attitude change. These include the heuristic-systematic model of information processing and the elaboration likelihood model.

Heuristic-systematic model of information processing

The heuristic-systematic model of information processing describes two depths in the processing of attitude change, systematic processing and heuristic processing. In this model, information is either processed in a high-involvement and high-effort systematic way, or information is processed through shortcuts known as heuristics. 
For example, emotions are affect-based heuristics, in which feelings and gut reactions are often used as shortcuts.

Systematic processing occurs when individuals are motivated and have the cognitive capacity to process a message. Individuals using systematic processing are motivated to pay attention and have the cognitive ability to think deeply about a message; they are persuaded by the content of the message, such as the strength or logic of the argument. Motivation can be determined by many factors, such as how personally relevant the topic is, and cognitive ability can be determined by how knowledgeable an individual is about the message topic, or whether or not there is a distraction in the room. Individuals who receive a message through systematic processing usually internalize the message, resulting in a longer and more stable attitude change. According to the heuristic-systematic model of information processing, people are motivated to use systematic processing when they want to achieve a "desired level of confidence" in their judgments. Certain factors have been found to increase the use of systematic processing; these factors are associated with either decreasing an individual's actual confidence or increasing an individual's perceived confidence, and may include framing persuasive messages in an unexpected manner or increasing the self-relevance of the message. Systematic processing has been shown to be beneficial in social influence settings. Systematic reasoning has been shown to be successful in producing more valid solutions during group discussions and greater solution accuracy. Shestowsky's (1998) research on dyad discussions revealed that the individual in the dyad who had high motivation and a high need for cognition had the greater impact on group decisions.

Heuristic processing occurs when individuals have low motivation and/or low cognitive ability to process a message. Instead of focusing on the argument of the message, recipients using heuristic processing focus on more readily accessible information and other unrelated cues, such as the authority or attractiveness of the speaker. Individuals who process a message through heuristic processing do not internalize the message, and thus any attitude change resulting from the persuasive message is temporary and unstable. For example, people are more likely to grant favors if reasons are provided. A study shows that when people said, "Excuse me, I have five pages to xerox. May I use the copier?" they received a positive response 60% of the time. The statement, "Excuse me, I have five pages to xerox. I am in a rush. May I use the copier?" produced a 95% success rate.
- Social proof is the means by which we utilize other people's behaviors in order to form our own beliefs. Our attitudes toward following the majority change when a situation appears uncertain or ambiguous to us, when the source is an expert, or when the source is similar to us. In a study conducted by Sherif, he discovered the power of crowds when he worked with experimenters who looked up in the middle of New York City. As the size of the precipitating group increased, the percentage of passers-by who looked up increased as well.
- Reciprocity is returning a favor. People are more likely to return a favor if they have a positive attitude towards the other party. Reciprocity also develops interdependence and societal bonds.
- Authority plays a role in attitude change in situations where there are superior-inferior relationships. 
We are more likely to become obedient to authorities when the authority's expertise is perceived as high and when we anticipate receiving rewards. A famous study illustrating this kind of attitude change is the Milgram experiment, in which people were far more willing to "shock their partner" when following instructions from an authority figure than they would have been on their own.
- Liking has shown that if one likes another party, one is more inclined to carry out a favor. The attitude changes are based on whether an individual likes an idea or person; if he or she does not like the other party, he or she may not carry out the favor, or may do so only out of obligation. Liking can influence one's opinions through factors such as physical attractiveness, similarity, compliments, contact and cooperation.

Elaboration likelihood model

The elaboration likelihood model is similar in concept to, and shares many ideas with, other dual processing models, such as the heuristic-systematic model of information processing. In the elaboration likelihood model, cognitive processing is the central route and affective/emotion processing is often associated with the peripheral route. The central route pertains to an elaborate cognitive processing of information, while the peripheral route relies on cues or feelings. The ELM suggests that true attitude change only happens through the central processing route that incorporates both cognitive and affective components, as opposed to the more heuristics-based peripheral route. This suggests that motivation through emotion alone will not result in an attitude change.

Cognitive dissonance theory

Cognitive dissonance, a theory originally developed by Festinger (1957), is the idea that people experience a sense of guilt or uneasiness when two linked cognitions are inconsistent, such as when there are two conflicting attitudes about a topic, or inconsistencies between one's attitude and behavior on a certain topic. The basic idea of cognitive dissonance theory, as it relates to attitude change, is that people are motivated to reduce dissonance, which can be achieved through changing their attitudes and beliefs. Cooper and Fazio (1984) have also added that cognitive dissonance does not arise from any simple cognitive inconsistency, but rather results from freely chosen behavior that may bring about negative consequences. These negative consequences may be threats to the consistency, stability, predictability, competence, or moral goodness of the self-concept, or a violation of general self-integrity. Research has suggested multiple routes by which cognitive dissonance can be reduced. Self-affirmation has been shown to reduce dissonance; however, it is not always the mode of choice when trying to reduce dissonance. When multiple routes are available, it has been found that people prefer to reduce dissonance by directly altering their attitudes and behaviors rather than through self-affirmation. People who have high levels of self-esteem, who are postulated to possess abilities to reduce dissonance by focusing on positive aspects of the self, have also been found to prefer modifying cognitions, such as attitudes and beliefs, over self-affirmation. 
A simple example of cognitive dissonance resulting in attitude change: when a heavy smoker learns that his sister died young from lung cancer caused by heavy smoking, he experiences conflicting cognitions, namely the desire to smoke and the knowledge that smoking could lead to death, together with a desire not to die. In order to reduce dissonance, this smoker could change his behavior (i.e. stop smoking), change his attitude about smoking (i.e. accept that smoking is harmful), or retain his original attitude about smoking and modify his new cognition to be consistent with the first one: "I also work out, so smoking won't be harmful to me". Thus, attitude change is achieved when individuals experience feelings of uneasiness or guilt due to cognitive dissonance and actively reduce the dissonance by changing their attitudes, beliefs, or behavior in order to achieve consistency with the inconsistent cognitions.

Sorts of studies

- High-credibility sources lead to more attitude change immediately following the communication act, but a sleeper effect occurs in which the source is forgotten after a period of time.
- Mild fear appeals lead to more attitude change than strong fear appeals. Propagandists have often used fear appeals. Hovland's evidence about the effect of such appeals suggested that a source should be cautious in using fear appeals, because strong fear messages may interfere with the intended persuasion attempt.

The process of how people change their own attitudes has been studied for years. Belief rationalization has been recognized as an important aspect of understanding this process. The stability of people's past attitudes can be influenced if they hold beliefs that are inconsistent with their own behaviors. The influence of past behavior on current attitudes is stable when little information conflicts with the behavior. Alternatively, people's attitudes may lean more radically toward the prior behavior if the conflict makes it difficult to ignore and forces them to rationalize their past behavior. For example, if you believe that Greece is a beautiful country and that the people there are very hardworking, then your belief will be stable or even become reinforced when you visit the country and are amazed by its beauty. However, when you interact with local Greeks, you may realize that they are in actuality not as hard-working as you had imagined. This inconsistency becomes difficult to ignore once it stands out. Attitudes are often restructured at the time people are asked to report them. As a result, inconsistencies between the information that enters into the reconstruction and the original attitudes can produce changes in prior attitudes, whereas consistency between these elements often elicits stability in prior attitudes. Individuals need to resolve the conflict between their own behaviors and their subsequent beliefs. However, people usually align themselves with their attitudes and beliefs instead of their behaviors. More importantly, this process of resolving people's cognitive conflicts cuts across both self-perception and dissonance, even when the associated effect may only be strong in changing prior attitudes. Human judgment is comparative in nature. 
Departing from identifying people's need to justify their own beliefs in the context of their own behaviors, psychologists also believe that people have the need to carefully evaluate new messages on the basis of whether these messages support or contradict prior messages, regardless of whether they can recall the prior messages after they reach a conclusion. This comparative processing mechanism is built on "information-integration theory" and "social judgment theory". Both of these theories have served to model people's attitude change when judging new information, although they have not adequately explained the factors that motivate people to integrate the information. More recent work in the area of persuasion has further explored this "comparative processing" from the perspective of comparing different sets of information on one single issue or object, instead of simply making comparisons among different issues or objects. As previous research demonstrated, analyzing information on one target product in isolation may have less impact than comparing that product with the same product under competing brands. When people compare different sets of information on one single issue or object, the effect of their effort to compare the new information with prior information appears to depend on the perceived strength of the new information when considered jointly with the initial information. Comparison processes can be enhanced when prior evaluations, associated information, or both are accessible. People will simply construct a current judgment based on the new information, or adjust the prior judgment, when they are not able to retrieve the information from prior messages. The impact this comparative process has on attitude change is mediated by changes in the perceived strength of the new information. These findings have a wide range of applications in social marketing, political communication, and health promotion. For example, an advertisement designed to counteract an existing attitude toward a behavior or policy is perhaps most effective if it uses the same format, characters, or music as the ads associated with the initial attitudes.
- McGuire, W., Lindzey, G., & Aronson, E. (1985). Attitudes and attitude change. Handbook of social psychology: Special fields and applications, 2, 233–346.
- Eagly, A., & Chaiken, S. (1995). Attitude strength, attitude structure and resistance to change. In R. Petty and J. Krosnick (Eds.), Attitude Strength (pp. 413–432). Mahwah, NJ: Erlbaum.
- Kelman, H.C. (1958). Compliance, identification, and internalization: Three processes of attitude change. Journal of Conflict Resolution, 2(1), 51–60.
- Asch, S. E. (1956). Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs, 70 (Whole no. 416).
- Cialdini, Robert B.; Goldstein, Noah J. (2004). "Social influence: Compliance and conformity". Annu. Rev. Psychol. 55: 591–621. doi:10.1146/annurev.psych.55.090902.142015.
- Breckler, S. J., & Wiggins, E. C. (1992). On defining attitude and attitude theory: Once more with feeling. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude Structure and Function (pp. 407–427). Hillsdale, NJ: Erlbaum.
- Davis, E. E. (1965). 
Attitude change: A review and bibliography of selected research. Paris: Unesco.
- Leventhal, 1970
- Maddux & Rogers, 1980
- Shelton & Rogers, 1981
- Janis, Kaye, & Kirschner, 1965
- Leventhal, H. A. (1970). Findings and theory in the study of fear communications. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 5, pp. 120–186). Orlando, FL: Academic Press.
- Bandura, A. (1982). "Self-efficacy mechanism in human agency". American Psychologist. 37: 122–147. doi:10.1037/0003-066x.37.2.122.
- Loewenstein, G. (2007). Affect regulation and affective forecasting. In Gross, J. J. (Ed.), Handbook of Emotion Regulation (pp. 180–203). New York: Guilford.
- Dillard, J. (1994). Rethinking the study of fear appeals: An emotional perspective. Communication Theory, 4, 295–323.
- Shavelson, R. J., & Stanton, G. C. (1975). Construct validation: Methodology and application to three measures of cognitive structure. Journal of Educational Measurement, 12, 67–85.
- Chaiken, S., Liberman, A., & Eagly, A. H. (1989). Heuristic and systematic information processing within and beyond the persuasion context. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 212–252). New York: Guilford.
- Chaiken, S. (1980). "Heuristic versus systematic information processing and the use of source versus message cues in persuasion". Journal of Personality & Social Psychology. 39 (5): 752–766.
- Wood, Wendy (2000). "Attitude Change: Persuasion and Social Influence". Annu. Rev. Psychol. 51: 539–570. doi:10.1146/annurev.psych.51.1.539. PMID 10751980.
- Smith, S. M.; Petty, R. E. (1996). "Message framing and persuasion: a message processing analysis". Pers. Soc. Psychol. Bull. 22: 257–68.
- Shestowsky, D.; Wegener, D. T.; Fabrigar, L. R. (1998). "Need for cognition and interpersonal influence: individual differences in impact on dyadic decision". J. Pers. Soc. Psychol. 74: 1317–28.
- Langer, Blank, & Chanowitz (1978)
- Sherif (1936)
- Cialdini, R. B. (2008). Influence: Science and practice (5th edition). New York: Harper Collins.
- Albarracin, D., Johnson, B. T., & Zanna, M. P. (2005). The handbook of attitudes. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
- Petty, Richard E.; Cacioppo, John T. (1986). "The elaboration likelihood model of persuasion" (PDF). Advances in Experimental Social Psychology. 19: 123–205.
- Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
- Cooper, J., & Fazio, R. H. (1984). A new look at dissonance theory. Adv. Exp. Soc. Psychol. 17: 229–66.
- Aronson, E. (1992). The return of the repressed: dissonance theory makes a comeback. Psychol. Inq. 3: 303–11.
- Steele, C. M. (1988). The psychology of self-affirmation. Adv. Exp. Soc. Psychol. 21: 261–302.
- Stone, J., Wiegand, A. W., Cooper, J., & Aronson, E. (1997). When exemplification fails: hypocrisy and the motive for self-integrity. J. Pers. Soc. Psychol. 72: 54–65.
- Gibbons, F. X., Eggleston, T. J., & Benthin, A. C. (1997). Cognitive reactions to smoking relapse: the reciprocal relation between dissonance and self-esteem. J. Pers. Soc. Psychol. 72: 184–95.
- Rogers, Everett M.: A history of communication study.
- Albarracin, D., & McNatt, P. S. (2005). Maintenance and decay of past behavior influences: Anchoring attitudes on beliefs following inconsistent actions. Personality and Social Psychology Bulletin, 31, 719–733.
- Festinger, L. (1957). A theory of cognitive dissonance. Evanston, IL: Row, Peterson.
- Erber, M. W., Hodges, S. D., & Wilson, T. D. (1995). 
Attitude strength, attitude stability, and the effects of analyzing reasons. In R. E. Petty & J. A. Krosnick (Eds.), Attitude strength: Antecedents and consequences. Ohio State University series on attitudes and persuasion (Vol. 4, pp. 433–454). Hillsdale, NJ: Lawrence Erlbaum.
- Judd, C. M., & Brauer, M. (1995). Repetition and evaluative extremity. In R. E. Petty & J. A. Krosnick (Eds.), Attitude strength: Antecedents and consequences. Ohio State University series on attitudes and persuasion (Vol. 4, pp. 43–71). Hillsdale, NJ: Lawrence Erlbaum.
- Mussweiler, T. (2003). Comparison processes in social judgment: mechanisms and consequences. Psychological Review, 110(3), 472.
- Crano, W. D., & Prislin, R. (2006). Attitudes and persuasion. Annual Review of Psychology, 57, 345–374.
- Anderson, N. H. (1959). A test model of opinion change. Journal of Abnormal and Social Psychology, 59, 371–381.
- Sherif, M., & Hovland, C. I. (1961). Placement of items on controversial issues. In M. Sherif & C. Hovland (Eds.), Social judgment (pp. 99–126). New Haven, CT: Yale University Press.
- Albarracin, D., Wallace, H. M., Hart, W., & Brown, R. D. (2012). How judgments change following comparison of current and prior information. Basic and Applied Social Psychology, 34, 44–55.
- Gentner, D., & Markman, A. B. (1997). Structural alignment in analogy and similarity. American Psychologist, 52, 45–56.
- Muthukrishnan, A. V., Pham, M. T., & Mungalé, A. (2001). Does greater amount of information always bolster attitudinal resistance? Marketing Letters, 12, 131–144.
Engage your students in the mysteries of Space! This resource gives students a fantastic introduction to black holes, i.e. what they are, how they are formed, and what their role is within the universe. This lesson is part of a 5-lesson introductory unit on Space. After teaching the 5 lessons, I ask the students to complete a research project on any space related topic of their choosing. I am always amazed at how interested my students are in Space and how motivated they are to learn more about it. Have fun!! This Lesson Includes: - A definition of Black Holes - The function of Black Holes - The discovery of Black Holes - The size of Black Holes - Black Holes and their relation to Galaxies - The dangers of Black Holes - 7 Discussion/Comprehension questions
Consensus: what it is and how it works

Consensus algorithms are one of the most important parts required for blockchains to function properly. They verify that all transactions are performed correctly and ensure that the entire system works. A consensus algorithm ensures interconnection between nodes, namely, the computers connected to a particular blockchain network. By interconnecting with each other, they check that all blocks are written correctly and exclude nodes suspected of improper operation, for example, after a hacker attack or due to fraud by the computer's owner. This supports the continued integrity and security of the blockchain. At the same time, the difference between a protocol and an algorithm is crucial to grasp: a protocol is the set of rules by which a blockchain operates, while a consensus algorithm is the mechanism for verifying that those rules are followed.

Selection of different algorithms

Today, there are many new blockchains in the crypto sector, with more and more of them appearing all the time. Consequently, new protocols, along with new consensus algorithms to verify them and ensure their stable operation, keep appearing on the market. The most common types of algorithms include:
- Proof-of-Work (PoW)
- Proof-of-Stake (PoS)
- Proof-of-Burn (PoB)
- Proof-of-Authority (PoA)
These algorithms are based on certain procedures designed to protect the blockchain from improperly executed transactions. Let's take a closer look at each of these consensus algorithms to understand the key differences between them.

Proof-of-Work is actively used in Bitcoin and Ethereum. The protocol works as follows: in order to confirm a transaction, miners in a given blockchain network must solve certain mathematical problems to prevent the same digital coins from being used twice. The first one who solves the problem gets a reward. The pros and cons of this method derive from its basic principle, namely, the complexity of the mathematical problems. Because solving them requires a large amount of computing power, the system gets effective protection against most hacker attacks. However, most computing power is wasted, since the new block is written only by the node that solved the problem first. Besides, huge capacities consume a lot of electricity, causing environmental problems. Still, we shouldn't ignore that PoW was the first consensus algorithm that allowed cryptocurrencies to emerge. For its time, it was a breakthrough in the security field.

The Proof-of-Stake consensus algorithm was proposed as an alternative to PoW on the Bitcointalk platform in 2011, and it is now used by such blockchains as Cardano, Binance Chain, IOTA, Nano, TRON, TomoChain, and Zilliqa. Moreover, such a popular platform as Ethereum is currently completing the process of switching to this algorithm. The principal difference between this algorithm and its predecessor is the complete absence of mining. It has been replaced by staking, a process similar in its operation to an ordinary bank deposit. Validators assume the role of miners: they are users who put digital coins into staking (long-term storage of coins in the blockchain network). The more coins are frozen in the network, the greater the chance of being the one to write a new blockchain entry and consequently get rewarded. The PoS advantage is that there is no waste of electricity and computing power. Besides, there is no need to keep increasing the hash rate by constantly buying new equipment. 
Also, this consensus contributes to high transaction throughput and, consequently, scalability, both of which are crucial for international projects. Still, staking, just like mining, requires expenses and technical skills. In order to become a validator, you have to possess the minimum required number of coins. For example, in Ethereum 2.0, it is 32 ETH (about $41,000 at the current exchange rate). You need to keep these coins frozen in your wallet for at least a few months. You will also need to set up the equipment and make sure it is constantly connected to the network.

Proof-of-Burn acts as an alternative to PoW and PoS. Miners send coins to a special address to which no private keys can be matched. In this way, the coins sent to this wallet cannot be spent: they are burned. As a reward, the miner creates a new block and is rewarded for it with new network coins. The more coins you burn, the higher the chances of getting a block reward. If we draw an analogy, it is similar to a lottery, where the miner destroys some of his coins so as to win the same coins, but in larger quantities. The algorithm's advantages include low power consumption and cost-effectiveness, as there is no need to spend money on expensive mining hardware. Moreover, if demand persists or grows, the algorithm can increase the value of the remaining coins, since their number is constantly decreasing. The PoB's biggest drawback is that it is suitable only for fully developed projects where the main coin emission is already completed, so they have something to burn. That is why this algorithm isn't popular. Still, it is sometimes used, as in the Counterparty blockchain (XCP).

Proof-of-Authority is a consensus algorithm that considers the merits and rankings of validators and combines the capabilities of PoW and PoS. There is no mining at all, which means no computing hardware competition, as well as no huge energy consumption. In PoA, validators use their own reputation rather than the power of hardware or the number of coins to generate blocks. For instance, a fixed number of validators, chosen by network participants or project developers, are responsible for the network's performance. Such an approach guarantees high transaction processing speed and good scalability. At the same time, validators make sure that their work is honest and transparent, otherwise they will lose their status and reputation as reliable network participants. The key drawback of PoA is the potential for excessive centralization and the lack of motivation for ordinary users, who are not rewarded for mining or staking. Moreover, in a classic PoA, the average user has no influence on the blockchain network because it is handled by trusted nodes, which are usually owned by the same company. This algorithm was used in the creation of the UMI cryptocurrency.
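To make the Proof-of-Work idea less abstract, here is a toy Python sketch, added as an illustration and not taken from the article, of the hash puzzle at the heart of PoW: a miner varies a nonce until the block's hash starts with a required number of zero digits. Real blockchains use far harder targets and richer block structures; the difficulty value and the block contents below are invented for the example.

import hashlib

def mine(block_data, difficulty=4):
    """Find a nonce such that sha256(block_data + nonce) starts with
    `difficulty` zero hex digits. This is the puzzle PoW miners race to solve."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

def verify(block_data, nonce, difficulty=4):
    """Anyone can check the solution with a single hash: verification is cheap."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("alice pays bob 5 coins; prev_hash=abc123")
print(nonce, digest)
print(verify("alice pays bob 5 coins; prev_hash=abc123", nonce))  # True

The asymmetry, expensive to find but cheap to verify, is what lets every node check a proposed block without redoing the miner's work.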
Table of Contents

What does the insert function do?
If you're having trouble finding the right function, the Insert Function command lets you search for the function you want. It also guides you through inserting the arguments, which is helpful for complex functions. Click the cell where you want to add a formula.

How do I use the insert function in Excel?
Go to the Formulas ribbon – choose either the Insert Function icon to bring up the Insert Function dialog box (the same dialog box you would get with the first method), or click the arrow next to the correct category in the Function Library group, and then choose the desired function.

Which sign is used for entering a function?
Just like a basic formula, you need to start with the equal sign. After that, you would put the function name, then the range of cells inside parentheses, separated with a colon. For example: =SUM(B2:B5).

What are 3 different ways that you can insert a function into Excel?
There are several ways you can insert your functions:
- Formulas tab, Insert Function.
- Pressing (Shift + F3).
- Clicking the Insert Function button "fx".
- Typing an equal sign directly into a cell. This method does not display the "Insert Function" dialog box.
- Using the Name Box on the left of the formula bar.

What is insert()?
The Python insert() method adds an element to a specific position in a list. insert() accepts the position at which you want to add the new item and the new item you want to add as arguments. The parameters for the insert() method are: index, the position where an element should be inserted, and the element to insert at that position.

What is the Insert Function dialog?
The Insert Function dialog box (shown in the following figure) is designed to simplify the task of using functions in your worksheet. The dialog box not only helps you locate the proper function for the task at hand, but also provides information about the arguments that the function takes.

What is the purpose of the Insert Function dialog box?

What does B2:C10 mean in Excel?
CELL REFERENCES: Cells B2:C10 are the entries from column B row 2 in the top left to column C row 10 in the bottom right. This is 2 columns times 9 rows, yielding 18 entries. Cell references are most often relative but can also be absolute.

How do you use the SIGN function?
Excel SIGN function:
- Summary: the Excel SIGN function returns the sign of a number as +1, -1 or 0.
- Purpose: get the sign of a number (one if positive, negative one if negative).
- Syntax: =SIGN(number)
- number – the number to get the sign of.
If the number is positive, SIGN returns 1.

What is the use of the equal sign in Excel?
Using calculation operators in Excel formulas:
= (equal sign): Equal to, e.g. =A1=B1
> (greater than sign): Greater than, e.g. =A1>B1
< (less than sign): Less than, e.g. =A1<B1
>= (greater than or equal to sign): Greater than or equal to, e.g. =A1>=B1

Where is the Insert Function button in Excel?
You can display the Insert Function dialog box in three ways:
- Click the Insert Function button on the Formulas ribbon.
- On the Formula Bar, click the smaller Insert Function button (which looks like fx).
- Click the small arrow to the right of the AutoSum feature on the Formulas ribbon, and select More Functions.

What are the rules to enter a function?
The rules to enter a function are:
- All Excel functions must begin with the = sign.
- The function name must be a valid Excel name, for example SUM or AVERAGE.
- The function name must be followed by an opening and closing parenthesis.
- Arguments are enclosed in the parentheses. 
For example: =SUM(A1:A5).
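Since the page also touches on Python's list insert() method, a short runnable example may help keep it distinct from Excel's Insert Function command. This snippet is an added illustration; the list contents are invented for the demo.

# Python's list.insert(index, element) places `element` at position `index`,
# shifting later items to the right (nothing is overwritten).
steps = ["open workbook", "type formula", "press Enter"]

# Insert a new step at index 1 (the second position).
steps.insert(1, "select cell")
print(steps)  # ['open workbook', 'select cell', 'type formula', 'press Enter']

# An index past the end simply appends, and negative indices count from the end.
steps.insert(len(steps), "save workbook")
steps.insert(-1, "check result")
print(steps)
# ['open workbook', 'select cell', 'type formula', 'press Enter', 'check result', 'save workbook']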
Three independent lines of evidence support this conclusion: the first measurements of excess hydrogen at Mercury’s north pole with MESSENGER’s Neutron Spectrometer, the first measurements of the reflectance of Mercury’s polar deposits at near-infrared wavelengths with the Mercury Laser Altimeter (MLA), and the first detailed models of the surface and near-surface temperatures of Mercury’s north polar regions that utilize the actual topography of Mercury’s surface measured by the MLA. These findings are presented in three papers published online today in Science Express. Given its proximity to the Sun, Mercury would seem to be an unlikely place to find ice. But the tilt of Mercury’s rotational axis is almost zero — less than one degree — so there are pockets at the planet’s poles that never see sunlight. Scientists suggested decades ago that there might be water ice and other frozen volatiles trapped at Mercury’s poles. The idea received a boost in 1991, when the Arecibo radio telescope in Puerto Rico detected unusually radar-bright patches at Mercury’s poles, spots that reflected radio waves in the way one would expect if there were water ice. Many of these patches corresponded to the location of large impact craters mapped by the Mariner 10 spacecraft in the 1970s. But because Mariner saw less than 50 percent of the planet, planetary scientists lacked a complete diagram of the poles to compare with the images. MESSENGER’s arrival at Mercury last year changed that. Images from the spacecraft’s Mercury Dual Imaging System taken in 2011 and earlier this year confirmed that radar-bright features at Mercury’s north and south poles are within shadowed regions on Mercury’s surface, findings that are consistent with the water-ice hypothesis. Now the newest data from MESSENGER strongly indicate that water ice is the major constituent of Mercury’s north polar deposits, that ice is exposed at the surface in the coldest of those deposits, but that the ice is buried beneath an unusually dark material across most of the deposits, areas where temperatures are a bit too warm for ice to be stable at the surface itself. MESSENGER uses neutron spectroscopy to measure average hydrogen concentrations within Mercury’s radar-bright regions. Water-ice concentrations are derived from the hydrogen measurements. “The neutron data indicate that Mercury’s radar-bright polar deposits contain, on average, a hydrogen-rich layer more than tens of centimeters thick beneath a surficial layer 10 to 20 centimeters thick that is less rich in hydrogen,” writes David Lawrence, a MESSENGER Participating Scientist based at The Johns Hopkins University Applied Physics Laboratory and the lead author of one of the papers. “The buried layer has a hydrogen content consistent with nearly pure water ice.” Data from MESSENGER’s Mercury Laser Altimeter (MLA) — which has fired more than 10 million laser pulses at Mercury to make detailed maps of the planet’s topography — corroborate the radar results and Neutron Spectrometer measurements of Mercury’s polar region, writes Gregory Neumann of the NASA Goddard Space Flight Center. In a second paper, Neumann and his colleagues report that the first MLA measurements of the shadowed north polar regions reveal irregular dark and bright deposits at near-infrared wavelength near Mercury’s north pole. 
“These reflectance anomalies are concentrated on poleward-facing slopes and are spatially collocated with areas of high radar backscatter postulated to be the result of near-surface water ice,” Neumann writes. “Correlation of observed reflectance with modeled temperatures indicates that the optically bright regions are consistent with surface water ice.” The MLA also recorded dark patches with diminished reflectance, consistent with the theory that the ice in those areas is covered by a thermally insulating layer. Neumann suggests that impacts of comets or volatile-rich asteroids could have provided both the dark and bright deposits, a finding corroborated in a third paper led by David Paige of the University of California, Los Angeles. Paige and his colleagues provided the first detailed models of the surface and near-surface temperatures of Mercury’s north polar regions that utilize the actual topography of Mercury’s surface measured by the MLA. The measurements “show that the spatial distribution of regions of high radar backscatter is well matched by the predicted distribution of thermally stable water ice,” he writes. According to Paige, the dark material is likely a mix of complex organic compounds delivered to Mercury by the impacts of comets and volatile-rich asteroids, the same objects that likely delivered water to the innermost planet. The organic material may have been darkened further by exposure to the harsh radiation at Mercury’s surface, even in permanently shadowed areas. This dark insulating material is a new wrinkle to the story, says Sean Solomon of Columbia University’s Lamont-Doherty Earth Observatory, principal investigator of the MESSENGER mission. “For more than 20 years the jury has been deliberating on whether the planet closest to the Sun hosts abundant water ice in its permanently shadowed polar regions. MESSENGER has now supplied a unanimous affirmative verdict.” “But the new observations have also raised new questions,” adds Solomon. “Do the dark materials in the polar deposits consist mostly of organic compounds? What kind of chemical reactions has that material experienced? Are there any regions on or within Mercury that might have both liquid water and organic compounds? Only with the continued exploration of Mercury can we hope to make progress on these new questions.”
This last week we will work with a mathematical model to explain how evolution has occurred, so please watch this video. Below you also have information about this topic, along with exercises you can complete for extra points. Use a piece of paper to copy and solve them.

POPULATION GENETICS AND THE HARDY-WEINBERG LAW

The Hardy-Weinberg formulas allow scientists to determine whether evolution has occurred. Any changes in the gene frequencies in the population over time can be detected. The law essentially states that if no evolution is occurring, then an equilibrium of allele frequencies will remain in effect in each succeeding generation of sexually reproducing individuals. In order for equilibrium to remain in effect (i.e. for no evolution to be occurring), the following five conditions must be met:
- No mutations must occur, so that new alleles do not enter the population.
- No gene flow can occur (i.e. no migration of individuals into, or out of, the population).
- Random mating must occur (i.e. individuals must pair by chance).
- The population must be large, so that no genetic drift (random chance) can cause the allele frequencies to change.
- No selection can occur, so that certain alleles are not selected for, or against.

The Hardy-Weinberg equations are

p² + 2pq + q² = 1 and p + q = 1

where
p = frequency of the dominant allele in the population
q = frequency of the recessive allele in the population
p² = percentage of homozygous dominant individuals
q² = percentage of homozygous recessive individuals
2pq = percentage of heterozygous individuals

Individuals that have an aptitude for math find that working with the above formulas is ridiculously easy. However, for individuals who are unfamiliar with algebra, it takes some practice working problems before you get the hang of it. Below I have provided a series of practice problems that you may wish to try out. Note that I have rounded off some of the numbers in some problems to the second decimal place.
- PROBLEM #1. You have sampled a population in which you know that the percentage of the homozygous recessive genotype (aa) is 36%. Using that 36%, calculate the following:
  - The frequency of the "aa" genotype.
  - The frequency of the "a" allele.
  - The frequency of the "A" allele.
  - The frequencies of the genotypes "AA" and "Aa."
  - The frequencies of the two possible phenotypes if "A" is completely dominant over "a."
- PROBLEM #2. Sickle-cell anemia is an interesting genetic disease. Normal homozygous individuals (SS) have normal blood cells that are easily infected with the malarial parasite. Thus, many of these individuals become very ill from the parasite and many die. Individuals homozygous for the sickle-cell trait (ss) have red blood cells that readily collapse when deoxygenated. Although malaria cannot grow in these red blood cells, individuals often die because of the genetic defect. However, individuals with the heterozygous condition (Ss) have some sickling of red blood cells, but generally not enough to cause mortality. In addition, malaria cannot survive well within these "partially defective" red blood cells. Thus, heterozygotes tend to survive better than either of the homozygous conditions. If 9% of an African population is born with a severe form of sickle-cell anemia (ss), what percentage of the population will be more resistant to malaria because they are heterozygous (Ss) for the sickle-cell gene?
- PROBLEM #3. There are 100 students in a class. Ninety-six did well in the course whereas four blew it totally and received a grade of F. Sorry. 
In the highly unlikely event that these traits are genetic rather than environmental, if these traits involve dominant and recessive alleles, and if the four (4%) represent the frequency of the homozygous recessive condition, please calculate the following:
  - The frequency of the recessive allele.
  - The frequency of the dominant allele.
  - The frequency of heterozygous individuals.
- PROBLEM #4. Within a population of butterflies, the color brown (B) is dominant over the color white (b), and 40% of all butterflies are white. Given this simple information, which is something that is very likely to be on an exam, calculate the following:
  - The percentage of butterflies in the population that are heterozygous.
  - The frequency of homozygous dominant individuals.
- PROBLEM #5. A rather large population of Biology instructors has 396 red-sided individuals and 557 tan-sided individuals. Assume that red is totally recessive. Please calculate the following:
  - The allele frequencies of each allele.
  - The expected genotype frequencies.
  - The number of heterozygous individuals that you would predict to be in this population.
  - The expected phenotype frequencies.
  - Conditions happen to be really good this year for breeding and next year there are 1,245 young "potential" Biology instructors. Assuming that all of the Hardy-Weinberg conditions are met, how many of these would you expect to be red-sided and how many tan-sided?
- PROBLEM #6. A very large population of randomly-mating laboratory mice contains 35% white mice. White coloring is caused by the double recessive genotype "aa". Calculate allelic and genotypic frequencies for this population.
- PROBLEM #7. After graduation, you and 19 of your closest friends (let's say 10 males and 10 females) charter a plane to go on a round-the-world tour. Unfortunately, you all crash land (safely) on a deserted island. No one finds you and you start a new population totally isolated from the rest of the world. Two of your friends carry (i.e. are heterozygous for) the recessive cystic fibrosis allele (c). Assuming that the frequency of this allele does not change as the population grows, what will be the incidence of cystic fibrosis on your island?
- PROBLEM #8. You sample 1,000 individuals from a large population for the MN blood group, which can easily be measured since co-dominance is involved (i.e., you can detect the heterozygotes). They are typed accordingly:
  Blood type M, genotype MM: 490 individuals, frequency 0.49
  Blood type MN, genotype MN: 420 individuals, frequency 0.42
  Blood type N, genotype NN: 90 individuals, frequency 0.09
  Using the data provided above, calculate the following:
  - The frequency of each allele in the population.
  - Supposing the matings are random, the frequencies of the matings.
  - The probability of each genotype resulting from each potential cross.
- PROBLEM #9. Cystic fibrosis is a recessive condition that affects about 1 in 2,500 babies in the Caucasian population of the United States. Please calculate the following:
  - The frequency of the recessive allele in the population.
  - The frequency of the dominant allele in the population.
  - The percentage of heterozygous individuals (carriers) in the population.
- PROBLEM #10. In a given population, only the "A" and "B" alleles are present in the ABO system; there are no individuals with type "O" blood or with O alleles in this particular population. If 200 people have type A blood, 75 have type AB blood, and 25 have type B blood, what are the allelic frequencies of this population (i.e., what are p and q)? 
- PROBLEM #11. The ability to taste PTC is due to a single dominant allele "T". You sampled 215 individuals in biology, and determined that 150 could detect the bitter taste of PTC and 65 could not. Calculate all of the potential frequencies.
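For readers who want to check their answers, here is a small Python sketch, added as an illustration and not part of the original handout, that applies the Hardy-Weinberg relations to Problem #1, where the homozygous recessive frequency q² = 0.36.

import math

# Hardy-Weinberg: p + q = 1 and p^2 + 2pq + q^2 = 1.
# Problem #1 gives the homozygous recessive frequency q^2 = 0.36.
q_squared = 0.36

q = math.sqrt(q_squared)      # frequency of the recessive allele "a" -> 0.6
p = 1 - q                     # frequency of the dominant allele "A"  -> 0.4

AA = p ** 2                   # homozygous dominant   -> 0.16
Aa = 2 * p * q                # heterozygous          -> 0.48
aa = q ** 2                   # homozygous recessive  -> 0.36

dominant_phenotype = AA + Aa  # "A" completely dominant over "a" -> 0.64

print(f"q = {q:.2f}, p = {p:.2f}")
print(f"AA = {AA:.2f}, Aa = {Aa:.2f}, aa = {aa:.2f}")
print(f"dominant phenotype = {dominant_phenotype:.2f}, recessive phenotype = {aa:.2f}")

# Sanity check: the three genotype frequencies must sum to 1.
assert abs(AA + Aa + aa - 1) < 1e-9

The same recipe, take the square root of the homozygous recessive frequency to get q, then use p = 1 - q, works for Problems #3, #4, #6 and #9 as well.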
Ocean thermal energy conversion (OTEC) uses the temperature difference between cooler deep and warmer shallow or surface seawaters to run a heat engine and produce useful work, usually in the form of electricity. OTEC can operate with a very high capacity factor and so can operate in base load mode. Among ocean energy sources, OTEC is one of the continuously available renewable energy resources that could contribute to base-load power supply. The resource potential for OTEC is considered to be much larger than for other ocean energy forms [World Energy Council, 2000]. Up to 88,000 TWh/yr of power could be generated from OTEC without affecting the ocean’s thermal structure [Pelc and Fujita, 2002]. Systems may be either closed-cycle or open-cycle. Closed-cycle OTEC uses working fluids that are typically thought of as refrigerants such as ammonia or R-134a. These fluids have low boiling points, and are therefore suitable for powering the system’s generator to generate electricity. The most commonly used heat cycle for OTEC to date is the Rankine cycle, using a low-pressure turbine. Open-cycle engines use vapor from the seawater itself as the working fluid. OTEC can also supply quantities of cold water as a by-product. This can be used for air conditioning and refrigeration and the nutrient-rich deep ocean water can feed biological technologies. Another by-product is fresh water distilled from the sea. OTEC theory was first developed in the 1880s and the first bench size demonstration model was constructed in 1926. Currently the world's only operating OTEC plant is in Japan, overseen by Saga University. Attempts to develop and refine OTEC technology started in the 1880s. In 1881, Jacques Arsene d'Arsonval, a French physicist, proposed tapping the thermal energy of the ocean. D'Arsonval's student, Georges Claude, built the first OTEC plant, in Matanzas, Cuba in 1930. The system generated 22 kW of electricity with a low-pressure turbine. The plant was later destroyed in a storm. In 1935, Claude constructed a plant aboard a 10,000-ton cargo vessel moored off the coast of Brazil. Weather and waves destroyed it before it could generate net power. (Net power is the amount of power generated after subtracting power needed to run the system). In 1962, J. Hilbert Anderson and James H. Anderson, Jr. focused on increasing component efficiency. They patented their new "closed cycle" design in 1967. This design improved upon the original closed-cycle Rankine system, and included this in an outline for a plant that would produce power at lower cost than oil or coal. At the time, however, their research garnered little attention since coal and nuclear were considered the future of energy. Japan is a major contributor to the development of OTEC technology. Beginning in 1970 the Tokyo Electric Power Company successfully built and deployed a 100 kW closed-cycle OTEC plant on the island of Nauru. The plant became operational on 14 October 1981, producing about 120 kW of electricity; 90 kW was used to power the plant and the remaining electricity was used to power a school and other places. This set a world record for power output from an OTEC system where the power was sent to a real (as opposed to an experimental) power grid. 1981 also saw a major development in OTEC technology when Russian engineer, Dr. Alexander Kalina, used a mixture of ammonia and water to produce electricity. This new ammonia-water mixture greatly improved the efficiency of the power cycle. 
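Both the Rankine and Kalina approaches are ultimately limited by the small temperature difference between surface and deep seawater. As a rough back-of-the-envelope illustration, with temperatures assumed here rather than taken from the article (tropical surface water near 25 °C and deep water near 5 °C), the short Python sketch below computes the ideal Carnot limit on efficiency; real OTEC plants achieve only a few percent in practice.

# Ideal (Carnot) efficiency for a heat engine running between warm surface
# seawater and cold deep seawater. Temperatures are illustrative assumptions.
def carnot_efficiency(t_hot_c, t_cold_c):
    """Carnot limit: 1 - T_cold / T_hot, with temperatures converted to kelvin."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

eff = carnot_efficiency(t_hot_c=25.0, t_cold_c=5.0)
print(f"Carnot limit for 25 C / 5 C seawater: {eff:.1%}")  # roughly 6.7%

The low ceiling explains why OTEC plants must move very large volumes of water, and why improvements such as the Kalina and Uehara cycles, described next, matter so much.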
In 1994 Saga University designed and constructed a 4.5 kW plant for the purpose of testing a newly invented Uehara cycle, named after its inventor Haruo Uehara. This cycle included absorption and extraction processes that allow this system to outperform the Kalina cycle by 1-2%. Currently, the Institute of Ocean Energy, Saga University, is the leader in OTEC power plant research and also focuses on many of the technology's secondary benefits. The 1970s saw an uptick in OTEC research and development after the 1973 Arab-Israeli War, which caused oil prices to triple. The U.S. federal government poured $260 million into OTEC research after President Carter signed a law that committed the US to a production goal of 10,000 MW of electricity from OTEC systems by 1999. In 1974, the U.S. established the Natural Energy Laboratory of Hawaii Authority (NELHA) at Keahole Point on the Kona coast of Hawaii. Hawaii is the best US OTEC location, due to its warm surface water, access to very deep, very cold water, and high electricity costs. The laboratory has become a leading test facility for OTEC technology. In the same year, Lockheed received a grant from the U.S. National Science Foundation to study OTEC. This eventually led to an effort by Lockheed, the US Navy, Makai Ocean Engineering, Dillingham Construction, and other firms to build the world's first and only net-power producing OTEC plant, dubbed "Mini-OTEC". For three months in 1979, a small amount of electricity was generated. Research related to making open-cycle OTEC a reality began in earnest in 1979 at the Solar Energy Research Institute (SERI) with funding from the US Department of Energy. Evaporators and suitably configured direct-contact condensers were developed and patented by SERI. An original design for a power-producing experiment, then called the 165-kW experiment, was described by Kreith and Bharathan in the Max Jakob Memorial Award Lecture. The initial design used two parallel axial turbines, using last-stage rotors taken from large steam turbines. Later, a team led by Dr. Bharathan at the National Renewable Energy Laboratory (NREL) developed the initial conceptual design for an updated 210 kW open-cycle OTEC experiment. This design integrated all components of the cycle, namely the evaporator, condenser and turbine, into one single vacuum vessel, with the turbine mounted on top to prevent any potential for water to reach it. The vessel was made of concrete as the first process vacuum vessel of its kind. Attempts to make all components from low-cost plastic material could not be fully realized, as some conservatism was required for the turbine and the vacuum pumps, which were developed as the first of their kind. Later Dr. Bharathan worked with a team of engineers at the Pacific Institute for High Technology Research (PICHTR) to further pursue this design through the preliminary and final stages. It was renamed the Net Power Producing Experiment (NPPE) and was constructed at the Natural Energy Laboratory of Hawaii (NELH) by a PICHTR team led by Chief Engineer Don Evans; the project was managed by Dr. Luis Vega. In 2002, India tested a 1 MW floating OTEC pilot plant near Tamil Nadu. The plant was ultimately unsuccessful due to a failure of the deep sea cold water pipe. Its government continues to sponsor research. In 2006, Makai Ocean Engineering was awarded a contract from the U.S.
Office of Naval Research (ONR) to investigate the potential for OTEC to produce nationally significant quantities of hydrogen in at-sea floating plants located in warm, tropical waters. Realizing the need for larger partners to actually commercialize OTEC, Makai approached Lockheed Martin to renew their previous relationship and determine whether the time was right for OTEC. And so in 2007, Lockheed Martin resumed work in OTEC and became a subcontractor to Makai to support their SBIR, which was followed by other collaborations. In March 2011, Ocean Thermal Energy Corporation signed an Energy Services Agreement (ESA) with the Baha Mar resort, Nassau, Bahamas, for the world's first and largest seawater air conditioning (SWAC) system. In June 2015, the project was put on pause while the resort resolved financial and ownership issues. In August 2016, it was announced that the issues had been resolved and that the resort would open in March 2017. It is expected that the SWAC system's construction will resume at that time. In July 2011, Makai Ocean Engineering completed the design and construction of an OTEC Heat Exchanger Test Facility at the Natural Energy Laboratory of Hawaii. The purpose of the facility is to arrive at an optimal design for OTEC heat exchangers, increasing performance and useful life while reducing cost (heat exchangers being the #1 cost driver for an OTEC plant). In March 2013, Makai announced an award to install and operate a 100 kilowatt turbine on the OTEC Heat Exchanger Test Facility, and once again connect OTEC power to the grid. In July 2016, the Virgin Islands Public Services Commission approved Ocean Thermal Energy Corporation's application to become a Qualified Facility. The company is thus permitted to begin negotiations with the Virgin Islands Water and Power Authority (WAPA) for a Power Purchase Agreement (PPA) pertaining to an Ocean Thermal Energy Conversion (OTEC) plant on the island of St. Croix. This would be the world's first commercial OTEC plant. In March 2013, Saga University with various Japanese industries completed the installation of a new OTEC plant. Okinawa Prefecture announced the start of OTEC operation testing at Kume Island on April 15, 2013. The main aim is to prove the validity of computer models and demonstrate OTEC to the public. The testing and research will be conducted with the support of Saga University until the end of FY 2016. IHI Plant Construction Co. Ltd, Yokogawa Electric Corporation, and Xenesys Inc were entrusted with constructing the 100 kilowatt class plant within the grounds of the Okinawa Prefecture Deep Sea Water Research Center. The location was specifically chosen in order to utilize existing deep seawater and surface seawater intake pipes installed for the research center in 2000. The pipe is used for the intake of deep sea water for research, fishery, and agricultural use. The plant consists of two 50 kW units in double Rankine configuration. The OTEC facility and deep seawater research center are open to free public tours by appointment, in English and Japanese. Currently, this is one of only two fully operational OTEC plants in the world. The plant operates continuously when specific tests are not underway. In 2011, Makai Ocean Engineering completed a heat exchanger test facility at NELHA. The facility is used to test a variety of heat exchange technologies for use in OTEC, and Makai has received funding to install a 105 kW turbine there.
Installation will make this facility the largest operational OTEC facility, though the record for largest power will remain with the Open Cycle plant also developed in Hawaii. In July 2014, the DCNS group, in partnership with Akuo Energy, announced NER 300 funding for their NEMO project. If successful, the 16 MW gross, 10 MW net offshore plant will be the largest OTEC facility to date. DCNS plans to have NEMO operational by 2020. An ocean thermal energy conversion power plant built by Makai Ocean Engineering went operational in Hawaii in August 2015. The governor of Hawaii, David Ige, "flipped the switch" to activate the plant. This is the first true closed-cycle Ocean Thermal Energy Conversion (OTEC) plant to be connected to a U.S. electrical grid. It is a demo plant capable of generating 105 kilowatts, enough to power about 120 homes. A heat engine gives greater efficiency when run with a large temperature difference. In the oceans the temperature difference between surface and deep water is greatest in the tropics, although still a modest 20 to 25 °C. It is therefore in the tropics that OTEC offers the greatest possibilities. OTEC has the potential to offer global amounts of energy that are 10 to 100 times greater than other ocean energy options such as wave power. OTEC plants can operate continuously, providing a base load supply for an electrical power generation system. The main technical challenge of OTEC is to generate significant amounts of power efficiently from small temperature differences. It is still considered an emerging technology. Early OTEC systems were 1 to 3 percent thermally efficient, well below the theoretical maximum of 6 to 7 percent for this temperature difference. Modern designs allow performance approaching the theoretical maximum Carnot efficiency. Cold seawater is an integral part of each of the three types of OTEC systems: closed-cycle, open-cycle, and hybrid. To operate, the cold seawater must be brought to the surface. The primary approaches are active pumping and desalination. Desalinating seawater near the sea floor lowers its density, which causes it to rise to the surface. The alternative to costly pipes to bring condensing cold water to the surface is to pump vaporized low-boiling-point fluid into the depths to be condensed, thus reducing pumping volumes, reducing technical and environmental problems, and lowering costs. Closed-cycle systems use fluid with a low boiling point, such as ammonia (having a boiling point around -33 °C at atmospheric pressure), to power a turbine to generate electricity. Warm surface seawater is pumped through a heat exchanger to vaporize the fluid. The expanding vapor turns the turbo-generator. Cold water, pumped through a second heat exchanger, condenses the vapor into a liquid, which is then recycled through the system. In 1979, the Natural Energy Laboratory and several private-sector partners developed the "mini OTEC" experiment, which achieved the first successful at-sea production of net electrical power from closed-cycle OTEC. The mini OTEC vessel was moored 1.5 miles (2.4 km) off the Hawaiian coast and produced enough net electricity to illuminate the ship's light bulbs and run its computers and television. Open-cycle OTEC uses warm surface water directly to make electricity. The warm seawater is first pumped into a low-pressure container, which causes it to boil. In some schemes, the expanding vapor drives a low-pressure turbine attached to an electrical generator.
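The 6 to 7 percent ceiling quoted above follows directly from the Carnot limit for the tropical temperature difference; a minimal sketch, assuming 25 °C surface water and 5 °C deep water (temperatures within the ranges given above):

# Carnot limit for an OTEC heat engine working between typical
# tropical surface and deep-ocean temperatures (assumed values).
T_warm = 25 + 273.15   # surface seawater, K
T_cold = 5 + 273.15    # deep seawater, K
eta_carnot = 1 - T_cold / T_warm
print(f"Carnot efficiency: {eta_carnot:.1%}")   # about 6.7%

Real plants fall well short of this bound because of heat-exchanger temperature drops and pumping losses, which is why early systems reached only 1 to 3 percent.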
The vapor, which has left its salt and other contaminants in the low-pressure container, is pure fresh water. It is condensed into a liquid by exposure to cold temperatures from deep-ocean water. This method produces desalinized fresh water, suitable for drinking water, irrigation or aquaculture. In other schemes, the rising vapor is used in a gas lift technique of lifting water to significant heights. Depending on the embodiment, such vapor lift pump techniques generate power from a hydroelectric turbine either before or after the pump is used. In 1984, the Solar Energy Research Institute (now known as the National Renewable Energy Laboratory) developed a vertical-spout evaporator to convert warm seawater into low-pressure steam for open-cycle plants. Conversion efficiencies were as high as 97% for seawater-to-steam conversion (overall steam production would only be a few percent of the incoming water). In May 1993, an open-cycle OTEC plant at Keahole Point, Hawaii, produced close to 80 kW of electricity during a net power-producing experiment. This broke the record of 40 kW set by a Japanese system in 1982. A hybrid cycle combines the features of the closed- and open-cycle systems. In a hybrid, warm seawater enters a vacuum chamber and is flash-evaporated, similar to the open-cycle evaporation process. The steam vaporizes the ammonia working fluid of a closed-cycle loop on the other side of an ammonia vaporizer. The vaporized fluid then drives a turbine to produce electricity. The steam condenses within the heat exchanger and provides desalinated water (see heat pipe). A popular choice of working fluid is ammonia, which has superior transport properties, easy availability, and low cost. Ammonia, however, is toxic and flammable. Fluorinated carbons such as CFCs and HCFCs are not toxic or flammable, but they contribute to ozone layer depletion. Hydrocarbons too are good candidates, but they are highly flammable; in addition, this would create competition for use of them directly as fuels. The power plant size is dependent upon the vapor pressure of the working fluid. With increasing vapor pressure, the size of the turbine and heat exchangers decreases while the wall thickness of the pipe and heat exchangers increase to endure high pressure especially on the evaporator side. OTEC has the potential to produce gigawatts of electrical power, and in conjunction with electrolysis, could produce enough hydrogen to completely replace all projected global fossil fuel consumption. Reducing costs remains an unsolved challenge, however. OTEC plants require a long, large diameter intake pipe, which is submerged a kilometer or more into the ocean's depths, to bring cold water to the surface. Land-based and near-shore facilities offer three main advantages over those located in deep water. Plants constructed on or near land do not require sophisticated mooring, lengthy power cables, or the more extensive maintenance associated with open-ocean environments. They can be installed in sheltered areas so that they are relatively safe from storms and heavy seas. Electricity, desalinated water, and cold, nutrient-rich seawater could be transmitted from near-shore facilities via trestle bridges or causeways. In addition, land-based or near-shore sites allow plants to operate with related industries such as mariculture or those that require desalinated water. Favored locations include those with narrow shelves (volcanic islands), steep (15-20 degrees) offshore slopes, and relatively smooth sea floors. 
These sites minimize the length of the intake pipe. A land-based plant could be built well inland from the shore, offering more protection from storms, or on the beach, where the pipes would be shorter. In either case, easy access for construction and operation helps lower costs. Land-based or near-shore sites can also support mariculture or chilled water agriculture. Tanks or lagoons built on shore allow workers to monitor and control miniature marine environments. Mariculture products can be delivered to market via standard transport. One disadvantage of land-based facilities arises from the turbulent wave action in the surf zone. OTEC discharge pipes should be placed in protective trenches to prevent subjecting them to extreme stress during storms and prolonged periods of heavy seas. Also, the mixed discharge of cold and warm seawater may need to be carried several hundred meters offshore to reach the proper depth before it is released, requiring additional expense in construction and maintenance. One way that OTEC systems can avoid some of the problems and expenses of operating in a surf zone is by building them just offshore in waters ranging from 10 to 30 meters deep (Ocean Thermal Corporation 1984). This type of plant would use shorter (and therefore less costly) intake and discharge pipes, which would avoid the dangers of turbulent surf. The plant itself, however, would require protection from the marine environment, such as breakwaters and erosion-resistant foundations, and the plant output would need to be transmitted to shore. To avoid the turbulent surf zone as well as to move closer to the cold-water resource, OTEC plants can be mounted to the continental shelf at depths up to 100 meters (330 ft). A shelf-mounted plant could be towed to the site and affixed to the sea bottom. This type of construction is already used for offshore oil rigs. The complexities of operating an OTEC plant in deeper water may make them more expensive than land-based approaches. Problems include the stress of open-ocean conditions and more difficult product delivery. Addressing strong ocean currents and large waves adds engineering and construction expense. Platforms require extensive pilings to maintain a stable base. Power delivery can require long underwater cables to reach land. For these reasons, shelf-mounted plants are less attractive. Floating OTEC facilities operate off-shore. Although potentially optimal for large systems, floating facilities present several difficulties. The difficulty of mooring plants in very deep water complicates power delivery. Cables attached to floating platforms are more susceptible to damage, especially during storms. Cables at depths greater than 1000 meters are difficult to maintain and repair. Riser cables, which connect the sea bed and the plant, need to be constructed to resist entanglement. As with shelf-mounted plants, floating plants need a stable base for continuous operation. Major storms and heavy seas can break the vertically suspended cold-water pipe and interrupt warm water intake as well. To help prevent these problems, pipes can be made of flexible polyethylene attached to the bottom of the platform and gimballed with joints or collars. Pipes may need to be uncoupled from the plant to prevent storm damage. As an alternative to a warm-water pipe, surface water can be drawn directly into the platform; however, it is necessary to prevent the intake flow from being damaged or interrupted during violent motions caused by heavy seas. 
Connecting a floating plant to power delivery cables requires the plant to remain relatively stationary. Mooring is an acceptable method, but current mooring technology is limited to depths of about 2,000 meters (6,600 ft). Even at shallower depths, the cost of mooring may be prohibitive. Because OTEC facilities are more-or-less stationary surface platforms, their exact location and legal status may be affected by the United Nations Convention on the Law of the Sea treaty (UNCLOS). This treaty grants coastal nations 12- and 200-nautical-mile (370 km) zones of varying legal authority from land, creating potential conflicts and regulatory barriers. OTEC plants and similar structures would be considered artificial islands under the treaty, giving them no independent legal status. OTEC plants could be perceived as either a threat or a potential partner to fisheries or to seabed mining operations controlled by the International Seabed Authority. Because OTEC systems have not yet been widely deployed, cost estimates are uncertain. A 2010 study by the University of Hawaii estimated the cost of electricity for OTEC at 94.0 cents (US) per kilowatt hour (kWh) for a 1.4 MW plant, 44.0 cents per kWh for a 10 MW plant, and 18.0 cents per kWh for a 100 MW plant. A 2015 report by the organization Ocean Energy Systems under the International Energy Agency gives an estimate of about 20 cents per kWh for 100 MW plants. Another study estimates power generation costs as low as 7 cents per kWh. For comparison with other energy sources, a 2019 study by Lazard estimated the unsubsidized cost of electricity at 3.2 to 4.2 cents per kWh for solar PV at utility scale and 2.8 to 5.4 cents per kWh for wind power. Beneficial factors that should be taken into account include OTEC's lack of waste products and fuel consumption, the area in which it is available (often within 20° of the equator), the geopolitical effects of petroleum dependence, compatibility with alternate forms of ocean power such as wave energy, tidal energy and methane hydrates, and supplemental uses for the seawater. OTEC projects under consideration include a small plant for the U.S. Navy base on the British overseas territory island of Diego Garcia in the Indian Ocean. Ocean Thermal Energy Corporation (formerly OCEES International, Inc.) is working with the U.S. Navy on a design for a proposed 13-MW OTEC plant, to replace the current diesel generators. The OTEC plant would also provide 1.25 million gallons per day of potable water. This project is currently waiting for changes in US military contract policies. OTE has proposed building a 10-MW OTEC plant on Guam. Ocean Thermal Energy Corporation (OTE) currently has plans to install two 10 MW OTEC plants in the US Virgin Islands and a 5-10 MW OTEC facility in The Bahamas. OTE has also designed the world's largest Seawater Air Conditioning (SWAC) plant for a resort in The Bahamas, which will use cold deep seawater as a method of air-conditioning. In mid-2015, the 95%-complete project was temporarily put on hold while the resort resolved financial and ownership issues. On August 22, 2016, the government of the Bahamas announced that a new agreement had been signed under which the Baha Mar resort will be completed. On September 27, 2016, Bahamian Prime Minister Perry Christie announced that construction had resumed on Baha Mar, and that the resort was slated to open in March 2017. OTE expects to have the SWAC plant up and running within two years of Baha Mar's opening.
Lockheed Martin's Alternative Energy Development team has partnered with Makai Ocean Engineering to complete the final design phase of a 10-MW closed-cycle OTEC pilot system which was planned to become operational in Hawaii in the 2012-2013 time frame. This system was designed to expand to 100-MW commercial systems in the near future. In November 2010 the U.S. Naval Facilities Engineering Command (NAVFAC) awarded Lockheed Martin a US$4.4 million contract modification to develop critical system components and designs for the plant, adding to the 2009 $8.1 million contract and two Department of Energy grants totaling over $1 million in 2008 and March 2010. A small but operational ocean thermal energy conversion (OTEC) plant was inaugurated in Hawaii in August 2015. The opening of the research and development 100-kilowatt facility marked the first time a closed-cycle OTEC plant was connected to the U.S. grid. On April 13, 2013, Lockheed contracted with the Reignwood Group to build a 10 megawatt plant off the coast of southern China to provide power for a planned resort on Hainan island. A plant of that size would power several thousand homes. The Reignwood Group acquired Opus Offshore in 2011, forming its Reignwood Ocean Engineering division, which is also engaged in the development of deepwater drilling. Currently the only continuously operating OTEC system is located in Okinawa Prefecture, Japan. Governmental support, local community support, and advanced research carried out by Saga University were key for the contractors, IHI Plant Construction Co. Ltd, Yokogawa Electric Corporation, and Xenesys Inc, to succeed with this project. Work is being conducted to develop a 1 MW facility on Kume Island, which will require new pipelines. In July 2014, more than 50 members formed the Global Ocean reSource and Energy Association (GOSEA), an international organization formed to promote the development of the Kumejima Model and to work towards the installation of larger deep seawater pipelines and a 1 MW OTEC facility. The companies involved in the current OTEC projects, along with other interested parties, have developed plans for offshore OTEC systems as well. On March 5, 2014, Ocean Thermal Energy Corporation (OTEC) and the 30th Legislature of the United States Virgin Islands (USVI) signed a Memorandum of Understanding to move forward with a study to evaluate the feasibility and potential benefits to the USVI of installing on-shore Ocean Thermal Energy Conversion (OTEC) renewable energy power plants and Seawater Air Conditioning (SWAC) facilities. The benefits to be assessed in the USVI study include both the baseload (24/7) clean electricity generated by OTEC and the various related products associated with OTEC and SWAC, including abundant fresh drinking water, energy-saving air conditioning, sustainable aquaculture and mariculture, and agricultural enhancement projects for the islands of St Thomas and St Croix. The Honorable Shawn-Michael Malone, President of the USVI Senate, commented on his signing of the Memorandum of Understanding (MOU) authorizing OTE's feasibility study. "The most fundamental duty of government is to protect the health and welfare of its citizens," said Senator Malone. "These clean energy technologies have the potential to improve the air quality and environment for our residents, and to provide the foundation for meaningful economic development.
Therefore, it is our duty as elected representatives to explore the feasibility and possible benefits of OTEC and SWAC for the people of USVI." On July 18, 2016, OTE's application to be a Qualifying Facility was approved by the Virgin Islands Public Services Commission. OTE also received permission to begin negotiating contracts associated with this project. South Korea's Research Institute of Ships and Ocean Engineering (KRISO) received Approval in Principle from Bureau Veritas for their 1 MW offshore OTEC design. No timeline was given for the project, which will be located 6 km offshore of the Republic of Kiribati. Akuo Energy and DCNS were awarded NER300 funding on July 8, 2014 for their NEMO (New Energy for Martinique and Overseas) project, which is expected to be a 10.7 MW-net offshore facility completed in 2020. The award to help with development totaled 72 million euros. On February 16, 2018, Global OTEC Resources announced plans to build a 150 kW plant in the Maldives, designed bespoke for hotels and resorts. "All these resorts draw their power from diesel generators. Moreover, some individual resorts consume 7,000 litres of diesel a day to meet demands, which equates to over 6,000 tonnes of CO2 annually," said Director Dan Grech. The EU awarded a grant, and Global OTEC Resources launched a crowdfunding campaign for the rest. OTEC has uses other than power production. Desalinated water can be produced in open- or hybrid-cycle plants using surface condensers to turn evaporated seawater into potable water. System analysis indicates that a 2-megawatt plant could produce about 4,300 cubic metres (150,000 cu ft) of desalinated water each day. Another system patented by Richard Bailey creates condensate water by regulating deep ocean water flow through surface condensers correlating with fluctuating dew-point temperatures. This condensation system uses no incremental energy and has no moving parts. On March 22, 2015, Saga University opened a flash-type desalination demonstration facility on Kumejima. This satellite of their Institute of Ocean Energy uses post-OTEC deep seawater from the Okinawa OTEC Demonstration Facility and raw surface seawater to produce desalinated water. Air is extracted from the closed system with a vacuum pump. When raw sea water is pumped into the flash chamber it boils, allowing pure steam to rise and the salt and remaining seawater to be removed. The steam is returned to liquid in a heat exchanger with cold post-OTEC deep seawater. The desalinated water can be used for hydrogen production or as drinking water (if minerals are added). The 41 °F (5 °C) cold seawater made available by an OTEC system creates an opportunity to provide large amounts of cooling to industries and homes near the plant. The water can be used in chilled-water coils to provide air-conditioning for buildings. It is estimated that a pipe 1 foot (0.30 m) in diameter can deliver 4,700 gallons of water per minute. Water at 43 °F (6 °C) could provide more than enough air-conditioning for a large building. Operating 8,000 hours per year in place of electrical air conditioning, with electricity selling for 5-10¢ per kilowatt-hour, it would save $200,000-$400,000 in energy bills annually. The InterContinental Resort and Thalasso-Spa on the island of Bora Bora uses an SWAC system to air-condition its buildings. The system passes seawater through a heat exchanger where it cools freshwater in a closed loop system. This freshwater is then pumped to buildings and directly cools the air.
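The savings figures quoted above are internally consistent, as a quick back-of-the-envelope check shows; the roughly 500 kW avoided chiller load below is an inferred, illustrative number rather than a figure from the source:

# Rough check of the quoted SWAC savings: how much chiller electricity
# would have to be displaced for the dollar figures above to hold.
hours_per_year = 8000
for price in (0.05, 0.10):                 # $/kWh, the 5-10 cent range above
    for savings in (200_000, 400_000):     # $/yr, the quoted savings range
        avoided_kw = savings / (price * hours_per_year)
        print(f"price {price*100:.0f} c/kWh, savings ${savings:,}: "
              f"avoided load ~{avoided_kw:,.0f} kW")
# At 5 c/kWh and $200k/yr (or 10 c/kWh and $400k/yr) the implied avoided
# chiller load is about 500 kW of electricity.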
In 2010, Copenhagen Energy opened a district cooling plant in Copenhagen, Denmark. The plant delivers cold seawater to commercial and industrial buildings, and has reduced electricity consumption by 80 percent. Ocean Thermal Energy Corporation (OTE) has designed a 9800-ton SDC system for a vacation resort in The Bahamas. OTEC technology supports chilled-soil agriculture. When cold seawater flows through underground pipes, it chills the surrounding soil. The temperature difference between roots in the cool soil and leaves in the warm air allows plants that evolved in temperate climates to be grown in the subtropics. Dr. John P. Craven, Dr. Jack Davidson and Richard Bailey patented this process and demonstrated it at a research facility at the Natural Energy Laboratory of Hawaii Authority (NELHA). The research facility demonstrated that more than 100 different crops can be grown using this system. Many of them normally could not survive in Hawaii or at Keahole Point. Japan has also been researching agricultural uses of deep sea water since 2000 at the Okinawa Deep Sea Water Research Institute on Kume Island. The Kume Island facilities use regular water cooled by deep sea water in a heat exchanger, run through pipes in the ground to cool the soil. These techniques have developed an important resource for the island community, which now produces spinach, a winter vegetable, commercially year round. An expansion of the deep seawater agriculture facility was completed by Kumejima Town next to the OTEC Demonstration Facility in 2014. The new facility is for researching the economic practicality of chilled-soil agriculture on a larger scale. Aquaculture is the best-known byproduct, because it reduces the financial and energy costs of pumping large volumes of water from the deep ocean. Deep ocean water contains high concentrations of essential nutrients that are depleted in surface waters due to biological consumption. This "artificial upwelling" mimics the natural upwellings that are responsible for fertilizing and supporting the world's largest marine ecosystems, and the largest densities of life on the planet. Cold-water delicacies, such as salmon and lobster, thrive in this nutrient-rich deep seawater. Microalgae such as Spirulina, a health food supplement, also can be cultivated. Deep-ocean water can be combined with surface water to deliver water at an optimal temperature. Non-native species such as salmon, lobster, abalone, trout, oysters, and clams can be raised in pools supplied by OTEC-pumped water. This extends the variety of fresh seafood products available for nearby markets. Such low-cost refrigeration can be used to maintain the quality of harvested fish, which deteriorate quickly in warm tropical regions. In Kona, Hawaii, aquaculture companies working with NELHA generate about $40 million annually, a meaningful contribution to the local economy. The NELHA plant established in 1993 produced an average of 7,000 gallons of freshwater per day. KOYO USA was established in 2002 to capitalize on this new economic opportunity. KOYO bottles the water produced by the NELHA plant in Hawaii. With the capacity to produce one million bottles of water every day, KOYO is now Hawaii's biggest exporter, with $140 million in sales. Hydrogen can be produced via electrolysis using OTEC electricity. Generated steam with electrolyte compounds added to improve efficiency is a relatively pure medium for hydrogen production. OTEC can be scaled to generate large quantities of hydrogen.
The main challenge is cost relative to other energy sources and fuels. The ocean contains 57 trace elements in salts and other forms, dissolved in solution. In the past, most economic analyses concluded that mining the ocean for trace elements would be unprofitable, in part because of the energy required to pump the water. Mining generally targets minerals that occur in high concentrations and can be extracted easily, such as magnesium. With OTEC plants supplying water, the only cost is for extraction. The Japanese investigated the possibility of extracting uranium and found that developments in other technologies (especially materials sciences) were improving the prospects. A rigorous treatment of OTEC reveals that a 20 °C temperature difference will provide as much energy as a hydroelectric plant with 34 m head for the same volume of water flow. The low temperature difference means that water volumes must be very large to extract useful amounts of heat. A 100 MW power plant would be expected to pump on the order of 12 million gallons (44,400 tonnes) per minute. For comparison, the pumps must move a mass of water greater than the weight of the battleship Bismarck, which weighed 41,700 tonnes, every minute. This makes pumping a substantial parasitic drain on energy production in OTEC systems, with one Lockheed design consuming 19.55 MW in pumping costs for every 49.8 MW of net electricity generated. For OTEC schemes using heat exchangers, to handle this volume of water the exchangers need to be enormous compared to those used in conventional thermal power generation plants, making them one of the most critical components due to their impact on overall efficiency. A 100 MW OTEC power plant would require 200 exchangers, each larger than a 20-foot shipping container, making them the single most expensive component. The total insolation received by the oceans (covering 70% of the earth's surface, with a clearness index of 0.5 and average energy retention of 15%) is: 5.45×10^18 MJ/yr × 0.7 × 0.5 × 0.15 = 2.87×10^17 MJ/yr. We can use the Beer–Lambert–Bouguer law to quantify the solar energy absorption by water, −dI(y)/dy = μ I(y), where y is the depth of water, I is intensity and μ is the absorption coefficient. Solving this differential equation gives I(y) = I0 exp(−μy). The absorption coefficient μ may range from 0.05 m^−1 for very clear fresh water to 0.5 m^−1 for very salty water. Since the intensity falls exponentially with depth y, heat absorption is concentrated at the top layers. Typically in the tropics, surface temperature values are in excess of 25 °C (77 °F), while at 1 kilometer (0.62 mi), the temperature is about 5–10 °C (41–50 °F). The warmer (and hence lighter) waters at the surface mean there are no thermal convection currents. Due to the small temperature gradients, heat transfer by conduction is too low to equalize the temperatures. The ocean is thus both a practically infinite heat source and a practically infinite heat sink. In this scheme, warm surface water at around 27 °C (81 °F) enters an evaporator at a pressure slightly below the saturation pressure, causing it to vaporize. The enthalpy at this state is H1 = Hf(T1), where Hf is the enthalpy of liquid water at the inlet temperature T1. This temporarily superheated water undergoes volume boiling, as opposed to pool boiling in conventional boilers where the heating surface is in contact with the liquid. Thus the water partially flashes to steam, with two-phase equilibrium prevailing. Suppose that the pressure inside the evaporator is maintained at the saturation pressure corresponding to T2.
The flash process conserves enthalpy, Hf(T1) = Hf(T2) + x2·Hfg(T2), so x2 = [Hf(T1) − Hf(T2)] / Hfg(T2). Here, x2 is the fraction of water by mass that vaporizes. The warm water mass flow rate per unit turbine mass flow rate is 1/x2. The low pressure in the evaporator is maintained by a vacuum pump that also removes the dissolved non-condensable gases from the evaporator. The evaporator now contains a mixture of water and steam of very low vapor quality (steam content). The steam is separated from the water as saturated vapor, so H3 = Hg(T2). The remaining water is saturated and is discharged to the ocean in the open cycle. The steam is a low-pressure, high-specific-volume working fluid. It expands in a special low-pressure turbine. For an ideal adiabatic, reversible expansion the entropy is unchanged, s3 = s5,s = sf(T5) + x5,s·sfg(T5); this equation corresponds to the temperature at the exhaust of the turbine, T5. x5,s is the mass fraction of vapor at state 5. The enthalpy at T5 is H5,s = Hf(T5) + x5,s·Hfg(T5). This enthalpy is lower than H3. The adiabatic reversible turbine work is H3 − H5,s. Actual turbine work WT = (H3 − H5,s) × polytropic efficiency. The condenser temperature and pressure are lower. Since the turbine exhaust is to be discharged back into the ocean, a direct contact condenser is used to mix the exhaust with cold water, which results in near-saturated water. That water is now discharged back to the ocean: H6 = Hf(T5). T7 is the temperature of the exhaust mixed with cold sea water; as the vapor content is now negligible, H7 ≈ Hf(T7). The temperature differences between stages include that between warm surface water and working steam, that between exhaust steam and cooling water, and that between cooling water reaching the condenser and deep water. These represent external irreversibilities that reduce the overall temperature difference. From these enthalpy balances follow the cold water flow rate per unit turbine mass flow rate, the turbine mass flow rate itself, and the warm and cold seawater mass flow rates. As developed starting in the 1960s by J. Hilbert Anderson of Sea Solar Power, Inc., in this cycle, QH is the heat transferred in the evaporator from the warm sea water to the working fluid. The working fluid exits the evaporator as a gas near its dew point. The high-pressure, high-temperature gas then is expanded in the turbine to yield turbine work, WT. The working fluid is slightly superheated at the turbine exit and the turbine typically has an efficiency of 90% based on reversible, adiabatic expansion. From the turbine exit, the working fluid enters the condenser where it rejects heat, -QC, to the cold sea water. The condensate is then compressed to the highest pressure in the cycle, requiring condensate pump work, WC. Thus, the Anderson closed cycle is a Rankine-type cycle similar to the conventional power plant steam cycle, except that in the Anderson cycle the working fluid is never superheated more than a few degrees Fahrenheit. Owing to viscosity effects, working fluid pressure drops in both the evaporator and the condenser. This pressure drop, which depends on the types of heat exchangers used, must be considered in final design calculations but is ignored here to simplify the analysis. Thus, the parasitic condensate pump work, WC, computed here will be lower than if the heat exchanger pressure drop were included. The major additional parasitic energy requirements in the OTEC plant are the cold water pump work, WCT, and the warm water pump work, WHT. Denoting all other parasitic energy requirements by WA, the net work from the OTEC plant, WNP, is the turbine work less the condensate pump work and these parasitic pumping requirements (WNP = WT − WC − WCT − WHT − WA, with each term taken as a positive magnitude). The thermodynamic cycle undergone by the working fluid can be analyzed without detailed consideration of the parasitic energy requirements.
From the first law of thermodynamics, the energy balance for the working fluid as the system is WN = QH + QC, where WN = WT + WC is the net work for the thermodynamic cycle (in this convention QC and the pump work WC carry negative signs, since heat is rejected and work is done on the fluid). For the idealized case in which there is no working fluid pressure drop in the heat exchangers, QH and QC can be evaluated directly from the enthalpy change of the working fluid across the evaporator and the condenser, so that the net thermodynamic cycle work becomes the difference between these two heat transfers. Subcooled liquid enters the evaporator. Due to the heat exchange with warm sea water, evaporation takes place and usually superheated vapor leaves the evaporator. This vapor drives the turbine, and the two-phase mixture enters the condenser. Usually, subcooled liquid leaves the condenser, and finally this liquid is pumped to the evaporator, completing the cycle. Carbon dioxide dissolved in deep cold and high pressure layers is brought up to the surface and released as the water warms. Mixing of deep ocean water with shallower water brings up nutrients and makes them available to shallow water life. This may be an advantage for aquaculture of commercially important species, but may also unbalance the ecological system around the power plant. OTEC plants use very large flows of warm surface seawater and cold deep seawater to generate constant renewable power. The deep seawater is oxygen deficient and generally 20-40 times more nutrient rich (in nitrate and nitrite) than shallow seawater. When these plumes are mixed, they are slightly denser than the ambient seawater. Though no large scale physical environmental testing of OTEC has been done, computer models have been developed to simulate the effect of OTEC plants. In 2010, a computer model was developed to simulate the physical oceanographic effects of one or several 100 megawatt OTEC plant(s). The model suggests that OTEC plants can be configured such that the plant can conduct continuous operations, with resulting temperature and nutrient variations that are within naturally occurring levels. Studies to date suggest that by discharging the OTEC flows downwards at a depth below 70 meters, the dilution is adequate and nutrient enrichment is small enough that 100-megawatt OTEC plants could be operated in a sustainable manner on a continuous basis. The nutrients from an OTEC discharge could potentially cause increased biological activity if they accumulate in large quantities in the photic zone. In 2011 a biological component was added to the hydrodynamic computer model to simulate the biological response to plumes from 100 megawatt OTEC plants. In all cases modeled (discharge at 70 meters depth or more), no unnatural variations occur in the upper 40 meters of the ocean's surface. The picoplankton response in the 110-70 meter depth layer is approximately a 10-25% increase, which is well within naturally occurring variability. The nanoplankton response is negligible. The enhanced productivity of diatoms (microplankton) is small. The subtle phytoplankton increase of the baseline OTEC plant suggests that higher-order biochemical effects will be very small. A previous Final Environmental Impact Statement (EIS) for the United States' NOAA from 1981 is available, but needs to be brought up to current oceanographic and engineering standards. Studies have been done to propose the best environmental baseline monitoring practices, focusing on a set of ten chemical oceanographic parameters relevant to OTEC. Most recently, NOAA held OTEC workshops in 2010 and 2012 seeking to assess the physical, chemical, and biological impacts and risks, and to identify information gaps or needs.
The performance of direct contact heat exchangers operating at typical OTEC boundary conditions is important to the Claude cycle. Many early Claude cycle designs used a surface condenser since their performance was well understood. However, direct contact condensers offer significant advantages. As cold water rises in the intake pipe, the pressure decreases to the point where gas begins to evolve. If a significant amount of gas comes out of solution, placing a gas trap before the direct contact heat exchangers may be justified. Experiments simulating conditions in the warm water intake pipe indicated about 30% of the dissolved gas evolves in the top 8.5 meters (28 ft) of the tube. The trade-off between pre-deaeration of the seawater and expulsion of non-condensable gases from the condenser depends on the gas evolution dynamics, deaerator efficiency, head loss, vent compressor efficiency and parasitic power. Experimental results indicate vertical spout condensers perform some 30% better than falling jet types. Because raw seawater must pass through the heat exchanger, care must be taken to maintain good thermal conductivity. Biofouling layers as thin as 25 to 50 micrometres (0.00098 to 0.00197 in) can degrade heat exchanger performance by as much as 50%. A 1977 study in which mock heat exchangers were exposed to seawater for ten weeks concluded that although the level of microbial fouling was low, the thermal conductivity of the system was significantly impaired. The apparent discrepancy between the level of fouling and the heat transfer impairment is the result of a thin layer of water trapped by the microbial growth on the surface of the heat exchanger. Another study concluded that fouling degrades performance over time, and determined that although regular brushing was able to remove most of the microbial layer, over time a tougher layer formed that could not be removed through simple brushing. The study passed sponge rubber balls through the system. It concluded that although the ball treatment decreased the fouling rate, it was not enough to completely halt growth, and brushing was occasionally necessary to restore capacity. The microbes regrew more quickly later in the experiment (i.e. brushing became necessary more often), replicating the results of a previous study. The increased growth rate after subsequent cleanings appears to result from selection pressure on the microbial colony. Continuous chlorination for 1 hour per day, as well as intermittent periods of free fouling followed by chlorination periods (again 1 hour per day), were studied. Chlorination slowed but did not stop microbial growth; however, chlorination levels of 0.1 mg per liter for 1 hour per day may prove effective for long-term operation of a plant. The study concluded that although microbial fouling was an issue for the warm surface water heat exchanger, the cold water heat exchanger suffered little or no biofouling and only minimal inorganic fouling. Besides water temperature, microbial fouling also depends on nutrient levels, with growth occurring faster in nutrient-rich water. The fouling rate also depends on the material used to construct the heat exchanger. Aluminium tubing slows the growth of microbial life, although the oxide layer which forms on the inside of the pipes complicates cleaning and leads to larger efficiency losses. In contrast, titanium tubing allows biofouling to occur faster, but cleaning is more effective than with aluminium.
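A minimal sketch of why such a thin film matters, treating the biofilm and its trapped water as an extra conductive resistance in series with the clean-surface heat-transfer coefficient; the clean-surface coefficient and film conductivity used here are illustrative assumptions, not values from the studies cited above:

# Effect of a thin biofouling film on the overall heat-transfer coefficient.
# Series-resistance model: 1/U_fouled = 1/U_clean + t_film / k_film.
U_clean = 10_000.0     # W/m^2.K, assumed clean-surface coefficient (illustrative)
k_film = 0.6           # W/m.K, roughly stagnant water (assumed)

for t_um in (25, 50):                      # film thickness, micrometres
    t = t_um * 1e-6                        # metres
    U_fouled = 1.0 / (1.0 / U_clean + t / k_film)
    loss = 1.0 - U_fouled / U_clean
    print(f"{t_um} um film: U drops from {U_clean:.0f} to {U_fouled:.0f} W/m^2.K "
          f"({loss:.0%} loss)")
# With these assumptions a 25-50 um film costs roughly 30-45% of the clean
# performance, the same order as the degradation reported above.

The result is sensitive to the assumed clean-surface coefficient: the better the clean exchanger, the more a given film thickness hurts, which is why OTEC exchangers are unusually vulnerable to even sparse fouling.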
The evaporator, turbine, and condenser operate in a partial vacuum ranging from 3% to 1% of atmospheric pressure. The system must be carefully sealed to prevent in-leakage of atmospheric air that can degrade or shut down operation. In open-cycle OTEC, the specific volume of the low-pressure steam is very large compared to that of the pressurized working fluid used in closed-cycle systems. Components must have large flow areas to ensure steam velocities do not attain excessively high values. An approach for reducing the exhaust compressor parasitic power loss is as follows. After most of the steam has been condensed by spout condensers, the non-condensable gas-steam mixture is passed through a counter-current region, which increases the gas-steam reaction by a factor of five. The result is an 80% reduction in the exhaust pumping power requirements. In winter in coastal Arctic locations, the temperature difference between the seawater and the ambient air can be as high as 40 °C (72 °F). Closed-cycle systems could exploit the air-water temperature difference. Eliminating seawater extraction pipes might make a system based on this concept less expensive than OTEC. This technology is due to H. Barjot, who suggested butane as the working fluid because of its boiling point of −0.5 °C (31.1 °F) and its non-solubility in water. Assuming a realistic efficiency of 4%, calculations show that the amount of energy generated with one cubic meter of water at a temperature of 2 °C (36 °F) in a place with an air temperature of −22 °C (−8 °F) equals the amount of energy generated by letting this cubic meter of water run through a hydroelectric plant of 4,000 feet (1,200 m) height. Barjot polar power plants could be located on islands in the polar region or designed as floating barges or platforms attached to the ice cap. The weather station Myggbuka on Greenland's east coast, for example, which is only 2,100 km away from Glasgow, records monthly mean temperatures below −15 °C (5 °F) during 6 winter months of the year.
The lesson I am going to talk about is from the teaching material Primary English for China, Book One, Unit 8 "Fruit", the third part, which is used by the kids in Grade One.
Part One: Analysis of the teaching material. This is a dialogue that happens in a fruit shop, and several sentences about selling and buying fruit will be learned. During the first and second parts of this unit, the kids learned to understand simple instructions and act accordingly, and they can say simple words, phrases or sentences by looking at objects and pictures, e.g. lychee, banana, apple, "What's this? It's an apple." In Unit Seven, we grasped the numbers from one to ten. The main language point in this unit is to make sentences using the fruit words and the numbers freely and to communicate with others in English in the fruit shop, paying close attention to the singular and plural forms of the nouns. According to the kids' English level and the corresponding content in daily life, I give them some extra extension. To train their ability to communicate with others in English, I prepare the following design.
1. Knowledge and skill aims: review the names of the ten different kinds of fruit and recognize the numbers from one to ten; understand simple instructions about the numbers and act accordingly; practice English and communicate with others in the situation.
2. Emotion, attitude and value goals: cultivate the spirit of co-operation in group work; bring up the good quality of protecting and making friends with the animals.
Teaching importance: 1. Make sentences using the fruit words and the numbers: "Six oranges, please." 2. Distinguish the difference between the singular and plural forms of the nouns: "one apple / two apples". 3. The sentences used when selling and buying fruit in a fruit shop.
Teaching difficulties: 1. Distinguish the difference between the singular and plural forms of the nouns. 2. Train their ability to communicate with others in English.
Teaching aids: multimedia, flash cards, fresh fruit, and the arrangements and decorations of the fruit shop.
Teaching methods: task-objective teaching method, TPR method, performance and game methods.
Part Two: Analysis of the learners. We are facing 5- to 6-year-old little kids who have just graduated from kindergarten; they cannot yet tell the difference between kindergarten and primary school, and sometimes they even don't know how to behave in class. So I think the most important thing for me to do is to attract their interest and make them love English and feel confident in this subject. I will play some interesting games with them, show them a funny cartoon movie, role-play the dialogues in the text, or have a competition.
We should not only focus on the language point itself, but also set up a real circumstance in which I can encourage the kids to express themselves better. What I try my best to do is to arouse their interest and protect their enthusiasm.
Part Three: Analysis of the teaching methods. The New English Lesson Standards say that during the Foundation Education period, the overall goal of the English lesson is to improve the pupils' ability to use the language comprehensively, and they promote a task-based teaching structure. According to little kids' physical and psychological characteristics of being curious, active, imitative and eager to show themselves, I adopt the "task - research - construct" teaching methods and organize the class to focus on the important points and solve the difficult ones. I give the pupils an open and relaxed circumstance in which they can learn to observe, think and discuss; during this procedure, the pupils' ability to think and use the language develops very well.
Part Four: Analysis of the teaching procedures.
1. Warm up. The whole class sings the English song "Ten Little Indian Boys" to arouse their interest and help them step into the English learning circumstance happily.
2. Review the fruit words and the numbers learned in the first and second parts of this unit.
A. Watch a funny video and answer the questions (learn more fruit words and practice more sentence patterns, e.g. strawberry / watermelon / pineapple / cherry). Ask some questions: What's this? What colour is it? How many bananas are there? Do you like eating bananas? What is your favourite fruit? Encourage them to open their mouths and speak English as much as they can.
B. Play a guessing game to review the spelling of the words using basic pronunciation knowledge.
C. Play a game named "Up and Down", with emphasis on distinguishing the singular and plural forms of the nouns.
3. Guide the pupils to the main teaching points: comprehensively using the numbers and the fruit words needed in a fruit shop. Ask two volunteers to come to the front, choose the right number cards and stick them beside the right fruit according to the other pupils' instructions. The quicker one is the winner. E.g.: "Six oranges, please."
4. Time to practice for the whole class. The pupils choose the right cards they have prepared and put them up above their heads when they hear the teacher's instructions, then give them to the teacher, answering loudly: "Here you are."
5. Watch a video and understand what is happening in the story (this part is important, reasonable and effective), and guide them to protect and make friends with the animals. Present the situation of a fruit shop: the teacher acts as the shopkeeper and invites a capable pupil to be the customer and finish all the buying steps.
Shopkeeper: Good morning.
Customer: Good morning.
Shopkeeper: Can I help you?
Customer: Yes, six oranges, please.
Shopkeeper: Here you are.
Customer: Thank you very much.
Shopkeeper: You are welcome.
6. Consolidation and practice. Group work: divide the class into eight groups, and every group is decorated as a fruit shop. Ask one pupil to be the shopkeeper, and the other members of the group will be the customers. Encourage them to buy and sell the fruit with what they have learned in this part. I design a real situation that is common in our daily life, so the kids will not feel uncomfortable or unfamiliar with it. The teacher goes around the class and offers help to those who cannot manage on their own. In such a peaceful and pleasant situation they like to speak the dialogue they have learned to express themselves. They can feel the success and become confident in speaking English.
7. Conclusion. The teacher leads the class to read the sentences on the board and asks some more difficult questions. Maybe the pupils cannot understand them clearly, but it doesn't matter; we just give the pupils more information about the language and a better language circumstance that can help them in their future learning.
8. Homework. Encourage the pupils to design a little fruit shop at home and teach their family the dialogue used in the fruit shop. When they practice this, they should take photos and show them to the other pupils the next day.
In this lesson, what I design (not only the presentation of the main teaching points, but also the activities) attracts the pupils' interest. They learn and practice while playing. I think it is really a good lesson of high quality.
It is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables (or 'predictors'). More specifically, regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed. Regression analysis is widely used for prediction and forecasting. Many techniques for carrying out regression analysis have been developed. Familiar methods such as linear regression and ordinary least squares regression are parametric, in that the regression function is defined in terms of a finite number of unknown parameters that are estimated from the data. Nonparametric regression refers to techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional. The performance of regression analysis methods in practice depends on the form of the data-generating process, and how it relates to the regression approach being used. Since the true form of the data-generating process is generally not known, regression analysis often depends to some extent on making assumptions about this process. These assumptions are sometimes testable if a sufficient quantity of data is available. In a multiple relationship, called multiple regression, two or more independent variables are used to predict one dependent variable. For example, an educator may wish to investigate the relationship between a student's success in college and factors such as the number of hours devoted to studying, the student's GPA, and the student's high school background. This type of study involves several variables. Simple relationships can also be positive or negative. A positive relationship exists when both variables increase or decrease at the same time. For instance, a person's height and weight are related, and the relationship is positive, since the taller a person is, generally, the more the person weighs. In a negative relationship, as one variable increases, the other variable decreases, and vice versa. In simple regression studies, the researcher collects data on two numerical or quantitative variables to see whether a relationship exists between the variables. For example, if a researcher wishes to see whether there is a relationship between number of hours of study and test scores on an exam, she must select a random sample of students, determine the hours each studied, and obtain their grades on the exam. The two variables for this study are called the independent variable and the dependent variable. The independent variable is the variable in regression that can be controlled or manipulated. The independent and dependent variables can be plotted on a graph called a scatter plot. The independent variable x is plotted on the horizontal axis, and the dependent variable y is plotted on the vertical axis. A scatter plot displays multiple XY coordinate data points that represent the relationship between two different variables on the X- and Y-axes. It is also called a correlation chart. It depicts the strength of the relationship between the independent variable on the horizontal axis and the dependent variable on the vertical axis. It enables strategizing on how to control the effect of the relationship on the process.
The independent and dependent variables can be plotted on a graph called a scatter plot (also called a scatter diagram, an X-Y graph, or a correlation chart). The independent variable x is plotted on the horizontal axis and the dependent variable y is plotted on the vertical axis, so the plot displays multiple (x, y) data points that represent the relationship between the two variables. A scatter plot is used when two variables are related or when evaluating paired continuous data, and it is also helpful for identifying potential root causes of a problem by relating two variables. It enables strategizing on how to control the effect of the relationship on the process. The tighter the data points lie along a line, the stronger the relationship between them, and the direction of the line indicates whether the relationship is positive or negative. The degree of association between the two variables is quantified by the correlation coefficient. If the points show no significant clustering, there is probably no correlation.

Statisticians use a measure called the correlation coefficient to determine the strength of the linear relationship between two variables. There are several types of correlation coefficients. The correlation coefficient computed from the sample data measures the strength and direction of a linear relationship between two variables. The symbol for the sample correlation coefficient is r; the symbol for the population correlation coefficient is ρ (the Greek letter rho). The range of the correlation coefficient is from -1 to +1. If there is a strong positive linear relationship between the variables, the value of r will be close to +1. If there is a strong negative linear relationship between the variables, the value of r will be close to -1. When there is no linear relationship between the variables, or only a weak relationship, the value of r will be close to 0.

Formula for the correlation coefficient r:
r = [n(Σxy) − (Σx)(Σy)] / √{[n(Σx²) − (Σx)²][n(Σy²) − (Σy)²]}
where n is the number of data pairs.

Population Correlation Coefficient
The population correlation coefficient is computed from all possible (x, y) pairs; it is designated by the Greek letter ρ (rho). The sample correlation coefficient can then be used as an estimator of ρ if the following assumptions are valid.
- The variables x and y are linearly related.
- The variables are random variables.
- The two variables have a bivariate normal distribution.
A bivariate normal distribution means that for the pairs of (x, y) data values, the corresponding y values have a bell-shaped distribution for any given x value, and the x values for any given y value have a bell-shaped distribution. Formally defined, the population correlation coefficient ρ is the correlation computed by using all possible pairs of data values (x, y) taken from a population.

Significance of the Correlation Coefficient
In hypothesis testing, one of these is true:
H0: ρ = 0 – This null hypothesis means that there is no correlation between the x and y variables in the population.
H1: ρ ≠ 0 – This alternative hypothesis means that there is a significant correlation between the variables in the population.
When the null hypothesis is rejected at a specific level, it means that there is a significant difference between the value of r and 0. When the null hypothesis is not rejected, it means that the value of r is not significantly different from 0 (zero) and is probably due to chance. Several methods can be used to test the significance of the correlation coefficient, such as the t test, with degrees of freedom equal to n − 2. Although hypothesis tests can be one-tailed, most hypotheses involving the correlation coefficient are two-tailed. Recall that ρ represents the population correlation coefficient.
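The following sketch computes r from hypothetical paired data using the equivalent deviation-score form of the formula, together with the t statistic (df = n − 2) used to test H0: ρ = 0.

```python
import math

# Hypothetical paired data (hours studied, exam score).
x = [2, 4, 5, 7, 8, 10, 11, 13]
y = [55, 60, 68, 72, 71, 80, 84, 90]
n = len(x)

mean_x, mean_y = sum(x) / n, sum(y) / n
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
syy = sum((yi - mean_y) ** 2 for yi in y)

r = sxy / math.sqrt(sxx * syy)             # sample correlation coefficient
t = r * math.sqrt((n - 2) / (1 - r ** 2))  # test statistic for H0: rho = 0
print(f"r = {r:.3f}, t = {t:.3f}, degrees of freedom = {n - 2}")
```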
Standard Error of Estimate
The standard error of the estimate is a measure of the accuracy of predictions made with a regression line. S represents the average distance that the observed values fall from the regression line. Conveniently, it tells you how wrong the regression model is on average, in the units of the response variable. Smaller values are better because they indicate that the observations are closer to the fitted line; S becomes smaller as the data points move closer to the line. The standard error of the estimate is calculated by the following formula:
s_est = √[ Σ(y − y′)² / (n − 2) ]
where y is an observed value, y′ is the corresponding value predicted by the regression line, and n is the number of data pairs. The formula may look formidable, but all of its components have already been calculated, apart from squaring and summing the differences y − y′. This value for the standard error of the estimate tells us the accuracy to expect from our prediction.
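A minimal sketch of this calculation, assuming the same kind of hypothetical paired data and the least-squares line fitted to it:

```python
import numpy as np

x = np.array([2, 4, 5, 7, 8, 10, 11, 13], dtype=float)
y = np.array([55, 60, 68, 72, 71, 80, 84, 90], dtype=float)

slope, intercept = np.polyfit(x, y, deg=1)   # least-squares regression line
y_pred = intercept + slope * x               # predicted values y'
s_est = np.sqrt(np.sum((y - y_pred) ** 2) / (len(x) - 2))
print(f"standard error of the estimate S = {s_est:.3f}")
```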
Blog / Jesus San-Miguel-Ayanz / September 28, 2020 Nearly 400 million hectares of natural areas are burnt every year by fires, causing large environmental and economic damages and contributing to the increase of carbon emissions worldwide. The ongoing wildfires in California and recent episodes of the 2019-2020 season in Australia, the Arctic region, and the Amazon among others, show that wildfires are being exacerbated by the already noticeable impacts of climate change. The Global Wildfire Information System (GWIS) is a joint initiative of the Group on Earth Observations (GEO), the NASA Applied Research and the EU Copernicus work programmes. Using advanced methods on data processing for wildfire detection and monitoring, numerical weather prediction models, and remote sensing, GWIS enables enhanced wildfire prevention, preparedness and effectiveness in wildfire management. It is supported by the scientific community and space agencies through the Global Observation of Forest Cover Fire Implementation Team, national research centers, and universities. Furthermore, GWIS provides the first global database of wildfire events for a continuous time frame (between 2001-2019) enabling the analysis of wildfire regimes worldwide and providing the basis for the assessment of potential effects of climate change. The data and services are readily available through a user-friendly web interface that allows data analysis, visualization and download access on the GWIS portal here. Ultimately, GWIS analyzes the impact of wildfires in terms of fire emissions, damage to human infrastructure and the environment. GWIS is composed of several modules covering the different phases of the fire cycle from the pre-fire to the post-fire stages. In the pre-fire stage, it provides a fire danger forecast up to 10 days in advance of the current date, supporting prevention and preparedness actions. During the crisis or emergency phase, it supports firefighting operations with multiple daily updates of information on active fires, fire extent and progression. GWIS, its tools and applications are promoted by NASA Applied Remote Sensing Training (ARSET) and the Food and Agriculture Organization (FAO). Results generated from the application of GWIS and wildfire impacts are essential to UN organizations, such as the United Nations Office of Disaster Risk Reduction (UNDRR), as well as support to others such as UN Environment Programme (UNEP) and UN Office for the Coordination of Humanitarian Affairs (OCHA). At the global level, GWIS is set to be a unique resource supporting developing countries that may not have access to national-level information on wildfires. Unfortunately, those being most affected by disasters more often include middle and low-income and developing countries according to the UN Development Programme (2018). This global open data portal is providing necessary information to those countries that need it most. GWIS – Support to the Australian government during the critical 2019-2020 bushfire season Over the 2019-2020 summer, bushfires heavily impacted various regions in Australia and caused widespread harm to people and animals and damage to the economy. Multiple states of emergency were declared across New South Wales, Victoria, and the Australian Capital Territory. During the most critical phase between December 2019 and January 2020, the European Commission established contact with the Australian government to offer support in terms of physical means and analysis of the situation. 
Given the unique capabilities of GWIS to provide synoptic information on ongoing wildfires and predictions of wildfire danger, the European Commission's Joint Research Centre (JRC) was requested to use GWIS to provide an overview of the situation across the Australian territory (Artes et al. 2020). Furthermore, GWIS was used to support the Australian government through Geoscience Australia by providing daily updates of active bushfires and burnt areas in all the Australian states. The two maps summarize the information provided by GWIS at different stages of the bushfire season. The first map, produced on the 15th of January, enabled the authorities in the EU to assess the situation in Australia, depicting the size and spatial extent of the fires as well as the number and type of resources already deployed by the Australian government to suppress them. This helped in assessing Australia's resource capacity and the type and amount of resources the EU could provide to help manage the fires. The second map, produced on the 5th of February once the fires were contained, provided a synoptic overview of the final state and impact of the fires, while still providing a warning on the wildfire danger forecast, which remained critical at that stage. Australia, at the time of the events of this case study, did not have a unified fire information system for collecting or providing information on the evolution of wildfires at the national level; information was obtained through the aggregation of state-level information. The wildfire danger forecast provided by GWIS in this case study benefited wildfire management at the national level because it enabled access to standardized information on wildfire danger in all states, accompanied by up-to-date data on the evolution of the fires. This information was simultaneously available to the Emergency Response Coordination Centre (ERCC) and the EU services, including the EU delegation in the country, which facilitated the dialogue between these services at the EU and Australian national levels and the assessment of the potential support needed by Australia and available at the EU level. GWIS thus supported the flow of information, increased preparedness, and the efficient management of firefighting resources.
It's not a bird or a plane, but it might be a solar storm. We like to think of astronauts as our superheroes, but the reality is that astronauts are not built like Superman, who gains strength from the sun. In fact, much of the energy radiating from the sun is harmful to us mere mortals. Outside Earth's protective magnetic field and atmosphere, the ionizing radiation in space will pose a serious risk to astronauts as they travel to Mars. High-energy galactic cosmic rays (GCRs), which are remnants of supernovas, and solar storms such as solar particle events (SPEs) and coronal mass ejections (CMEs) from the sun can harm both the body and the spacecraft. These are all components of space weather. When astronauts travel in space they can't see or even feel radiation. However, NASA's Human Research Program (HRP) is studying the effects radiation has on the human body and developing ways to monitor and protect against this silent hazard. "Dosimeters and modeling techniques are used to determine how much energy is deposited in the space explorer's bodies along with inflight tools to try to estimate what type of biological effects they might be experiencing," said Tony Slaba, Ph.D., NASA research physicist. "Solar storms can cause acute radiation sickness during space flight which has to be dealt with in real time. There's also an additional risk from exposure to GCRs which may cause central nervous system effects and delayed effects related to cancer and cardiovascular disease after the mission." While shielding strategies for GCRs remain difficult because of their extremely high energies, pharmaceutical countermeasures may be more effective than thick shielding at protecting the crew from GCRs. NASA is also developing space weather forecasting tools to provide advance warning of SPEs; solar protons, by contrast, can be shielded against relatively easily. The HRP is performing a variety of research to identify and validate biological countermeasures for protection. It researches an array of shielding design strategies that include ways to mitigate exposure from all forms of space weather. Historical worst- and best-case space weather scenarios are used to drive designs. Habitat design and overall vehicle optimization are being investigated to reduce the inflight risks from solar storms. These design strategies, coupled with the human research on the biological effects of space radiation, will allow astronauts to travel farther from Earth than ever before. As NASA embarks on the next big journey to send humans to Mars, it is imperative to protect our superheroes against the dangers of space. By implementing the best methods and technologies against the villain of space radiation, the journey may not be faster than a speeding bullet, but it will be safer. NASA's Human Research Program (HRP) is dedicated to discovering the best methods and technologies to support safe, productive human space travel. HRP enables space exploration by reducing the risks to astronaut health and performance using ground research facilities, the International Space Station, and analog environments. This leads to the development and delivery of a program focused on: human health, performance, and habitability standards; countermeasures and risk mitigation solutions; and advanced habitability and medical support technologies. HRP supports innovative, scientific human research by funding more than 300 research grants to respected universities, hospitals and NASA centers, reaching over 200 researchers in more than 30 states.
NASA Human Research Strategic Communications
Amy Blanchett | EurekAlert!
One of the easiest and most useful tools in Matlab is polyval, a very nice function that evaluates a polynomial function given its parameters and the range to evaluate… huh? All right, all right, we wanna be clear here, right? Suppose we are given a polynomial function, let's say f(x) = 3x^3 + 4x + 2, and we would like to represent it in Matlab. Well, we cannot just write it like that. Remember Matlab is a numerical computation program, which means that it won't compute any symbols. So forget it if you wanna compute something by writing letters. Matlab wants only numbers. You might give names to the variables, but still, Matlab computes only with numbers. Now, what to do? There's where our great friend polyval comes to the rescue!

Skip this section if you already know how a polynomial equation works. As you probably have noticed (you're reading this because you use them), any polynomial equation is written in the form:
a_n*x^n + a_(n-1)*x^(n-1) + … + a_2*x^2 + a_1*x + a_0
That is, for every power of x there is a constant a that multiplies it, and they all sum up. Remember that the values of a can be any real value. To be practical (or just seem fancy), some guys write it with a summation: the sum of a_i*x^i for i running from 0 to n. Now have a look at the last two terms of the long equation. Notice that there are no powers… or are there? Actually yes, there are, but follow their logic. You can see that the exponent decreases until it becomes 0. Do you find those last elements in the original polynomial equation? Well, that's because a_1*x^1 = a_1*x and a_0*x^0 = a_0*1 = a_0. In the first case it's not necessary to write that an element has power 1, we all know that. In the second case, any number times 1 is the same number: a * 1 = a. So why write redundant powers and multiplications? Keep it simple. "The shorter the better" is the battle cry when you write any scientific paper. Therefore we keep it short.

So yes, every sub-index of a matches the power of its corresponding x, and they decrease from n to 0. You can start from any value of n, provided that n is a positive integer. Too much mathematical formality? Ok, give n a value greater than 0 without decimals. We need numbers like 3, 10, 100 or 249583745. AND don't put numbers after the point, i.e. no 3.1415 nor 7.5, etc. The highest power of x defines the degree of the polynomial. So, if you say you have a polynomial of degree 4, its powers of x will run from 4 down to 0; for degree 6, from 6 down to 0; and so on… But what about our very first original equation? Its highest power is 3, thus it is a polynomial of degree 3 (a.k.a. cubic). It is also defined as a polynomial equation like the others, even if it doesn't have any squared power written:
f(x) = 3x^3 + 0x^2 + 4x + 2
You see how x^2 is multiplying a zero? Our equation also has its corresponding constant a2, but it is equal to 0. A very decent way to identify a friend-zoned variable. And the last element is equal to 2. If you enumerate the parameters (the constants a) of our equation, it would look like: a = 3, 0, 4, 2. From now on, get used to the fact that names of lists are noted with bolded letters, because we are gonna treat them as vectors. How to write that list as a vector? a = (3 0 4 2) = (3, 0, 4, 2). That is a list that runs from left to right; in other words, it is a row vector. With or without commas, it is already understood that we're handling a list of elements, especially with Matlab. If we want to convert it into a column vector, we can simply transpose it and write a superscript T at the end of the vector: a = (3, 0, 4, 2)^T. Did you get the idea so far? Great.
Now that you know how polynomial equations are built and represented, we can see how to use them in Matlab.

Going to Matlab: polyval is a very simple function that asks mainly for two things: the parameters and the variables. I like to think of it as a block diagram: it demands two inputs and gives you back a value. What are those? The parameters are expressed as a vector (a list if you want), where we enumerate them from the left-most parameter to the right-most, that is from a_n down to a_0. For our equation, written in Matlab, it is:
p = [3 0 4 2]
That line of code will take the list of parameters and put it in a vector named p (I chose p because it's easier to relate it to "parameters"). When you write an array of elements in Matlab, a comma and a simple space are equally valid to separate the elements. So p = [3 0 4 2] is the same as writing p = [3, 0, 4, 2].

The variables are the values we want to evaluate with. For example, we want to know the value of our function when x = 5. Substituting 5 for x in each element gives 3(5)^3 + 0(5)^2 + 4(5) + 2. Now we just do simple arithmetic and we discover that our function is equal to 397 when x is equal to 5. Using polyval we can simply write it as:
polyval([3 0 4 2], 5)
As you can see, we give the list of parameters (inside brackets) and then the variable. Don't forget to separate the inputs with a comma. But hey! We already have those values in the vector p, right? So, we can then write:
polyval(p, 5)
Both lines will give us the same value: 397.

Not convinced yet? Let's try another example. We have to evaluate the polynomial equation 6x^7 + 5.5x^2 when x = 8. What's its value? Polyval knows:
p = [6 0 0 0 0 5.5 0 0]
Did you see what I did here? First put the parameters of the equation in p, where only a7 and a2 have non-zero values. Then evaluate when the variable is equal to 8 with polyval(p, 8) to get 12583264. And it is as simple as that. Yuhuuuu!

More than one variable: But hey! What happens if I want to evaluate my equation with more than one value? Should I write polyval many times? Should I use a loop? Absolutely… not! A good thing about polyval is that you can use a list of values as the second input, instead of a single number. Like this:
polyval(p, [1.5 2.8 3.4])
In this case, polyval will evaluate our equation with the values 1.5, 2.8 and 3.4 (given our first list of parameters p = [3 0 4 2]):
18.1250 79.0560 133.5120
Now it gives 3 values back; each one is the result of evaluating our function with one of the three given values of x. And it scales to bigger (or shorter) lists. If we want to evaluate it with more values, a list stored in a vector also works: first store it and then pass it to polyval:
x = [1.5 2.8 3.4 5.6 8.7]

Summary with Examples: And finally I wanna summarize all this info with two examples. In the first example we are given the function 2x^4 - 5x^3 + 0.1x^2 - 2 and asked to evaluate it when x = 2. Using polyval we write:
polyval([2 -5 0.1 0 -2], 2)
And the result should be -9.6000.

In the second example we have the function 4x^3 + 1.1x^2 - 8. We want to evaluate it in the range from -5 to 8 and store all the values in the vector y. For this evaluation, we better use vectors containing the lists of elements:
p = [4 1.1 0 -8];
x = -5:8;
y = polyval(p, x);
Boom! You have it! Now you can use polyval to evaluate any polynomial function with any values. Try visualizing your data with plot(x, y). Do you see your cubic function? Great, right? I hope this post was useful for you.
Play around and leave a comment of your results or doubts.
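For readers who want to double-check the numbers above outside Matlab, the short Python sketch below reproduces the same evaluations with NumPy's polyval, which follows the same highest-power-first coefficient convention. It is an illustrative aside, not part of the original tutorial.

```python
import numpy as np

p = [3, 0, 4, 2]                        # 3x^3 + 0x^2 + 4x + 2
print(np.polyval(p, 5))                 # 397, matching the Matlab result above

print(np.polyval(p, [1.5, 2.8, 3.4]))   # [ 18.125  79.056 133.512]

x = np.arange(-5, 9)                    # the same range as Matlab's -5:8
y = np.polyval([4, 1.1, 0, -8], x)      # parameters of the second summary example
print(y)
```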
The field of population genetics examines the amount of genetic variation within populations and the processes that influence this variation. A population is defined as a group of interbreeding individuals that exist together at the same time. Genetic variation refers to the degree of difference found among individuals, for instance in height, coat color, or other less observable traits. The particular set of genes carried by an individual is known as his or her genotype, while all the genes in a population together comprise the "gene pool." The foundation for population genetics was laid in 1908, when Godfrey Hardy and Wilhelm Weinberg independently published what is now known as the Hardy-Weinberg equilibrium. The "equilibrium" is a simple prediction of genotype frequencies in any given generation, and the observation that the genotype frequencies are expected to remain constant from generation to generation as long as several simple assumptions are met. This description of stasis provides a counterpoint to studies of how populations change over time. The 1920s and 1930s witnessed the real development of population genetics, with important contributions by Ronald Fisher, Sewall Wright, and John B. S. Haldane. They, with many others, clearly established the basic processes which caused populations to change over time: selection, genetic drift, migration, and mutation. The change in the genetic makeup of a population over time, usually measured in terms of allele frequencies, is equivalent to evolutionary change. For this reason, population genetics provides the groundwork for scientists' understanding of evolution, in particular microevolution, or changes within one or several populations over a limited time span. The questions addressed by population genetics are quite varied, but many fall within several broad categories. How much genetic variation is found in populations, and what processes govern this? How will a population change over time, and can a stable endpoint be determined? How much and why do populations of the same species differ? The answer is always cast in terms of selection, drift, mutation, migration, and the complex interplay among them. Of the four, selection and genetic drift are usually given credit as the major forces. Simply put, selection occurs when some genotypes in the population are on average more successful in reproduction. These genotypes may survive better, produce more offspring, or be more successful in attracting mates; the alleles responsible for these traits are then passed on to offspring. There is broad theoretical consensus and abundant empirical data to suggest that selection can change populations radically and quickly. If one genetic variant, or allele, increases survivorship or fertility, selection will increase the frequency of the favored allele, and concurrently eliminate other alleles. This type of selection, called directional selection, decreases the amount of genetic variation in populations. Alternatively, an individual carrying two different alleles for the same gene (a heterozygote) may have advantages, as exemplified by the well-known example of the sickle-cell allele in Africa, in which heterozygotes are more resistant to malaria. In this case, called overdominant selection, genetic variation is preserved in the population. 
Although a number of similar examples are known, directional selection is much more common than overdominant selection; this implies that the common action of selection is to decrease genetic variation within populations. It is equally clear that if different (initially similar) populations occupy different habitats, selection can create differences among populations by favoring different alleles in different areas. Often overlooked by the layperson, genetic drift is given a place of importance in population genetics. While some analyses of genetic drift quickly become complicated, the basic process of drift is simple and involves random changes in allele frequency. In sexual species, the frequency of alleles contained in the progeny may not perfectly match the frequency of the alleles contained in the parents. As an analogy, consider flipping a coin twenty times. Although one might expect ten heads and ten tails, the actual outcome may be slightly different; in this example, the outcome (progeny) does not perfectly represent the relative frequency of heads and tails (the parents). What does this mean for populations? Start by considering neutral alleles, which have no impact on survival or reproduction. (An example is the presence or absence of a widow's peak hairline.) The frequency of a neutral allele may shift slightly between generations, sometimes increasing and sometimes decreasing. What outcomes are expected from this process? Suppose that a particular allele shifts frequency at random for a number of generations, eventually becoming very rare, with perhaps only one copy in the population. If the individual carrying this allele does not pass it on to any offspring or fails to have any offspring, the allele will be lost to the population. Once lost, the allele is gone from the population forever. In this light, drift causes the loss of genetic variation over time. All populations are subject to this process, with smaller populations more strongly affected than larger ones. Perhaps better known than the pervasive, general effects of genetic drift are special examples of drift associated with unusually small populations. Genetic bottlenecks occur when a small number of individuals from a much larger population are the sole contributors to future generations; this occurs when a catastrophe kills most of the population, or when a few individuals start a new population in a different area. Genetic bottlenecks reduce the genetic variation in the new or subsequent population relative to the old. Cheetahs, which have very little genetic variation, are presumed to have gone through several genetic bottlenecks. Occasionally, these new populations may have particular alleles that are much more common than in the original population, by chance alone. This is usually called the founder effect.
HALDANE, J. B. S. (1892–1964) British biologist and author who immigrated to India. Haldane was famous for both his flamboyant personality and his influence on genetics and evolutionary biology. Haldane, along with Ronald Fisher, showed that evolution is the change in frequency of individual genes over time.
Migration and Mutation
Migration may also be important in shaping the genetic variation within populations and the differences among them. To geneticists, the word "migration" is synonymous with the term "gene flow." Immigration may change allele frequencies within a population if the immigrants differ genetically.
The general effect of gene flow among populations is to make all of the populations of a species more similar. It can also restore alleles lost through genetic drift, or introduce new alleles formed by mutation in another population. Migration is often seen as the "glue" that binds the subpopulation of a species together. Emigration is not expected to change populations unless the migrants are genetically different from those that remain; this is rarely observed, so emigration is often ignored. The last important process is mutation. Mutation is now understood in great detail at the molecular level, and consists of any change in the deoxyribonucleic acid (DNA) sequence of an organism. These mutations range from single base substitutions to the deletion or addition of tens or hundreds of bases to the duplication or reorganization of entire chromosomes . Mutation is most important as the sole source of all new genetic variation, which can then be spread from the population of origin by migration. This importance should not be undervalued, although the impact of mutation on most populations is negligible at any given time. This is because mutation rates are typically very low. Questions and Contributions The real challenge of population genetics has been in understanding how the four processes work together to produce the observable patterns. For instance, genetic drift eliminates variation from populations, as do the most common modes of natural selection. How then can the abundance of genetic variation in the world be explained? This question has many complicated answers, but some cases, such as the observation of deleterious alleles in humans (for example, alleles for phenylketonuria, a genetic disease), might be explained in terms of mutation and selection. Mutation adds these alleles to a population, and selection removes them; although the rate of mutation is likely to be nearly constant, the rate at which selection removes them increases as the abundance of the allele increases. This is certainly true for recessive alleles, which are only expressed when an individual has two copies. With only one, the allele remains unexpressed and therefore not selected. At some point, predictable from the mutation rate and physical consequences of the disease, the two opposing forces balance, producing the stable persistence of the disease allele at low frequency. As a discipline, population genetics has contributed greatly to scientists' understanding of many disparate topics, including the development of resistance of insects to insecticides and of pathogenic bacteria to antibiotics, an explanation of human genetic variation like the alleles for sickle-cell anemia and blood groups, the evolutionary relationships among species, and many others. Of particular interest is the use of genetic data in conservation biology. By definition, endangered and threatened species have reduced population sizes, making them subject to the vagaries of genetic drift and also to inbreeding. Inbreeding is mating between genetically related individuals, and often leads to inbreeding depression, a reduction of health, vigor, and fertility. Genetic drift leads to a loss of genetic variation, which limits what selection can do to produce adaptations if the environment changes. Keeping these two issues in mind, greatly reduced populations may be at increasingly greater risk for genetic reasons, leading to further declines. 
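To make the role of chance described above concrete, the sketch below simulates genetic drift for a neutral allele by redrawing each generation's allele copies at random from the previous generation. The population sizes, starting frequency, and random seed are arbitrary illustrative choices, not data from any real population.

```python
import random

def drift(pop_size, freq=0.5, generations=200, seed=1):
    """Simulate random changes in the frequency of a neutral allele."""
    random.seed(seed)
    n_copies = 2 * pop_size            # diploid: two allele copies per individual
    count = round(freq * n_copies)
    for gen in range(1, generations + 1):
        p = count / n_copies           # parental allele frequency
        # Each allele copy in the offspring generation is drawn at random
        # from the parental gene pool (a binomial sample).
        count = sum(1 for _ in range(n_copies) if random.random() < p)
        if count == 0 or count == n_copies:
            return gen, count / n_copies   # allele lost (0.0) or fixed (1.0)
    return generations, count / n_copies

print("small population (N=20):  ", drift(20))    # drifts quickly; often lost or fixed
print("large population (N=2000):", drift(2000))  # frequency changes much more slowly
```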
Paul R. Cabe

Population genetics is the study of the genetic structure of populations, the frequencies of alleles and genotypes. A population is a local group of organisms of the same species that normally interbreed. Defining the limits of a population can be somewhat arbitrary if neighboring populations regularly interbreed. All the humans in a small town in the rural United States could be defined as a population, but what about the humans in a suburb of Los Angeles? They can interbreed directly with nearby populations, and, indirectly, with populations extending continuously north and south for a hundred or more miles. In addition, a large human population often consists of subpopulations that do not readily interbreed because of differences in education, income, and ethnicity. Despite these complexities, one can make some simple definitions.
Gene Pool and Genetic Structure
All of the alleles shared by all of the individuals in a population make up the population's gene pool. In diploid organisms such as humans, every gene is represented by two alleles. The pair of alleles may differ from one another, in which case it is said that the individual is "heterozygous" for that gene. If the two alleles are identical, it is said that the individual is "homozygous" for that gene. If every member of a population is homozygous for the same allele, the allele is said to be fixed. Most human genes are fixed and help define humans as a species. The most interesting genes to geneticists are those represented by more than one allele. Population genetics looks at how common an allele is in the whole population and how it is distributed. Imagine, for example, an allele "b" that when homozygous, "bb," produces blue-eyed individuals. Allele b might have an overall frequency in the population of 20 percent; that is, 20 percent of all the eye-color alleles are b. However, not everyone who has the b allele will be homozygous for b. Some people will have b combined with another allele, "B," which gives them brown eyes (because B is dominant and b is recessive). Others won't have the b allele at all and instead will be homozygous for B. The frequency of each genotype—whether bb, Bb, or BB—in the population is also of interest to population geneticists. The frequency of alleles and genotypes is called a population's genetic structure. Populations vary in their genetic structure. For example, the same allele may have a frequency of 3 percent among Europeans, 10 percent among Asians, and 94 percent among Africans. Blood types vary across different ethnic groups in this way. The frequency of genotypes depends partly on the overall allele frequencies, but also on other factors. Large, isolated populations whose members mate randomly and do not experience any selection pressure will tend to maintain a frequency of genotypes predicted by a simple equation called the Hardy-Weinberg Theorem.
For example, if b has a frequency of 20 percent and B has a frequency of 80 percent, we can predict the frequency of the three genotypes (bb, Bb, and BB). The total of all the allele frequencies is 100 percent (b + B = 100%), and the genotype frequencies are given by expanding (b + B)² = 100%. This can be restated as the following equation: 100% = b² + 2bB + B². And we can calculate the genotype frequencies as: 100% = (20%)² + 2(20% × 80%) + (80%)² = 4% + 32% + 64%. So even though 20 percent of all the genes in this imaginary population are b alleles, only 4 percent of the population is homozygous for b and actually has blue eyes. Furthermore, this same distribution will be maintained over time, as long as the conditions of the Hardy-Weinberg Theorem are met. However, few, if any, natural populations (including human ones) actually conform to the assumptions of Hardy-Weinberg, so both genotype frequencies and allele frequencies can and do change from generation to generation. For example, humans do not mate randomly. Instead, they tend to take partners of similar height and intelligence. And even in modern human populations, genetic diseases such as Tay-Sachs kill children long before they grow up and reproduce. A difference in survival and reproduction due to differences in genotype is called selection. Even subtle selection can change gene frequencies over long periods of time. Another assumption of the Hardy-Weinberg theorem is that individuals from different populations do not mate, so that gene flow, the passage of new genetic information from one gene pool into another, is zero. Such isolation does characterize many animal and plant populations, but almost no modern human populations are isolated from all other populations. Instead, humans travel to different countries, intermarrying and producing children who reflect the novel intermingling of unusual alleles. In very small populations, rare alleles can become common or disappear because of genetic drift—random changes in gene frequency that are not due to selection, gene mutation, or immigration. We can explain this as follows. When flipping a coin 1,000 times, it is likely to get 50 percent heads and 50 percent tails (if it's a fair coin). But flip it only five or ten times, and it is unlikely to get exactly half heads and half tails. Chances are good that the results will be something quite different. In the same way, if 10,000 people mate and produce children, the bb genotype will pretty much conform to the Hardy-Weinberg equation described above (provided the other assumptions are approximately true). But in a sample of just twenty people, instead of getting a group of children of whom 4 percent have blue eyes, the result could be that none have blue eyes, or maybe half of them do. It all depends on how the alleles happen to combine when eggs meet sperm. Because of genetic drift, small, isolated populations often have unusual frequencies of a few alleles. Although similar to other people in most important respects, such isolated populations may harbor high frequencies of one or more alleles that are rare in most other populations. For example, in 1814, fifteen people founded a British colony on a group of small islands in the mid-Atlantic, called Tristan da Cunha. They brought with them a rare recessive allele that causes progressive blindness, and the disease, extraordinarily rare in most places, is common on Tristan da Cunha.
Such "inbreeding" produces more homozygotes than usual and increases the probability of children born with genetic diseases. The Old Order Amish have a high frequency of Ellis-van Creveld syndrome, and Tay-Sachs disease was, until a few years ago, unusually common among Ashkenazi Jews. Fortunately, genetic testing has greatly reduced the incidence of Tay-Sachs and many other such genetic diseases. Population genetics also provides information about evolution. It is known, for example, that populations that have unusual allele frequencies must have been isolated from other populations. And we can surmise that populations that share similar frequencies of certain rare alleles may have interbred at some point in the past. Human populations in sub-Saharan Africa show the greatest diversity of all human populations. On the basis, in part, of this diversity, one theory of human evolution suggests that all humans originated in Africa, and then emigrated to Asia, Europe, and the rest of the world.

Population genetics is the statistical study of the natural differences found within a group of the same organisms. Instead of examining the genes of individuals, it looks at the dominant (the trait that first appears or is visibly expressed in the organism) and recessive (the trait that is present at the gene level but is masked and does not show itself in the organism) genes found within an entire population. Population genetics seeks to understand the factors that control which genes are expressed. It also creates mathematical models to try to predict which differences will be expressed and with what frequency. In the life sciences, a population consists of all the individuals of the same species (all of the same kinds of organisms, like all the tigers) that live in a particular habitat at the same time. Scientists know that in any population, whether it be tigers or people, the individuals that make it up are all different. They may all be tigers, but each has individual and very recognizable traits. Although some might think that all animals of the same species look exactly alike, it is known that once someone becomes familiar with a certain group of the same species, he or she can usually tell one from another. At first all black Labrador retrievers look alike. After a closer look, it can be seen that there are very obvious and easily recognizable differences among them. It is known that it is mostly the individual's genetic inheritance that accounts for these minor differences. This means that the unique combination of dominant and recessive genes that the individual has inherited is responsible for all of its individual traits (color, size, abilities, and tendencies, to name only a few).
WHAT IS POPULATION GENETICS?
Population genetics is a tool used to study the genetic basis of evolution (the process by which gradual genetic change occurs over time to a group of living things), and it is helpful in allowing scientists to understand the relative importance of the many factors that influence evolution. It studies a given population's gene pool (which is the total of all of the genes available to a generation). Knowing what the gene pool consists of enables scientists to establish a sort of genetic base out of which future offspring will be composed.
This assumes that over time, the population is made up of individuals that breed only with others of their species that live in the same habitat. Once the gene pool is established, scientists are able to use Mendel's laws of inheritance (concerning patterns of dominant and recessive genes) to predict what differences there will be among individuals in that population. Scientists are able to establish what are called gene frequencies, or percentages at which certain genes will be expressed. Scientists also have been able to establish a law that actually measures what changes will take place. Called the Hardy-Weinberg law because it was proposed independently in 1908 by the English mathematician Godfrey H. Hardy (1877–1947) and the German physician Wilhelm Weinberg (1862–1937), it is a mathematical formula that has become the basis of population genetics. Using this formula (which only works perfectly when certain ideal conditions are met), scientists are able to describe a steady state called genetic equilibrium. In this state, gene frequencies stay the same and nothing changes unless some outside force intervenes. Naturally, the real world, especially that involving human beings, is not perfect, and there are many factors always at work that make conditions less than ideal. Chance events happen all the time. Reproduction does not always work, and individuals leave populations while others may wander in. These are only a few potential variables. However, the Hardy-Weinberg formula is still useful and helps in arriving at some relative frequencies, so it still applies in some way to the real world.
WHY STUDY POPULATION GENETICS?
By studying what makes individuals in the same population different, science is able to learn more about evolutionary change. Population genetics can also draw very useful conclusions. For example, when populations of interbreeding individuals are very small, they are highly susceptible to extinction by any number of chance events. This is because their interbreeding has not given them much genetic variation (differences). When something in their habitat changes, they may be unable to adapt quickly enough. Population genetics, therefore, is a valuable, if not always statistically perfect, tool for life scientists.
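As a small illustration of the Hardy-Weinberg calculation worked through earlier (the 4 percent / 32 percent / 64 percent example), the sketch below computes the expected genotype frequencies from an allele frequency; the 20 percent figure is the same illustrative value used above.

```python
def hardy_weinberg(freq_b):
    """Genotype frequencies expected under Hardy-Weinberg assumptions."""
    freq_B = 1.0 - freq_b
    bb = freq_b ** 2              # homozygous recessive (blue eyes in the example)
    Bb = 2 * freq_b * freq_B      # heterozygous carriers
    BB = freq_B ** 2              # homozygous dominant
    return bb, Bb, BB

bb, Bb, BB = hardy_weinberg(0.20)
print(f"bb = {bb:.0%}, Bb = {Bb:.0%}, BB = {BB:.0%}")   # bb = 4%, Bb = 32%, BB = 64%
assert abs(bb + Bb + BB - 1.0) < 1e-12                   # the three genotypes sum to 100%
```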
Welcome to another round of Math Tip Monday! This month we are all going to share some of our favorite measurement ideas. In first grade we focus on nonstandard measurement. We look at the length of items, proper ways to measure, and also how we can use an object to compare the lengths of two other objects (transitivity). Vocabulary: measure, order, length, height, width, longer than, shorter than, nonstandard, unit, gap, overlap I like to always tie in real world connections. One way I do that is by starting my lesson with a "Who Cares?" anchor chart. We brainstorm a list of professions that care about measurement and who use it daily. Here is what my kiddos came up with: How do doctors (for people) use measurement for their job? Do animal doctors use it in the same way or differently? When is a time you needed to measure at home? Which tool did you use? How do builders/designers use measurement? What would happen if they didn’t know how to measure? Agree of disagree (and justify)- Nonstandard Measurement is a necessary skill to learn. In addition to a discussion, we also start each lesson with a problem. Here are a few of the problems that were in this week's lesson plan: Nigel and Corey each have new pencils that are the same length. Corey uses his pencil so much that he needs to sharpen it several times. Nigel doesn’t use his at all. Nigel and Corey compare pencils. Whose pencil is longer? Draw a picture to show your thinking. Jordan has 3 stuffed animals: a giraffe, a bear, and a monkey. The giraffe is longer than the monkey. The bear is shorter than the monkey. Sketch the animals from shortest to longest to show how tall each animal is. Draw a picture to match each of these two sentences: The book is longer than the index card. The book is shorter than the folder. Which is longer, the index card or the folder? Write a statement comparing the two objects. Use your drawings to help you answer the question. Joe ran a string from his room to his sister’s room to measure the distance between them. When he tried to use the same string to measure the distance from his room to his brother’s room, the string didn’t reach! Which room was closer to Joe’s room, his sister’s or his brother’s? Julia’s lollipop is 15 cubes long. She measured the lollipop with 9 red cubes and some blue cubes. How many blue cubes did she use? Of course in addition to talking about measurement, we have to get really get into it and measure! My kids love using different nonstandard tools to measure classroom items (especially if I lay on the floor and they get to measure me!). We have discussions about not having gaps or overlaps when measuring. We also problem solve and discuss why two people might get a different answer when measuring the same object (measured wrong, used two different tools, one measured length and one measured width, etc.) I am a huge book nerd. So of course, I need to tie in literacy somehow! One of my favorite books for this unit is How Big Is A Foot. We listen to it read aloud on YouTube. If you haven't read this book yet it is simply adorable. We discussed how the King's footprint was a different size than other people's and the importance of being clear in which unit of measurement we are using. I showed them an example of Shaq's 22.5 inch foot (YIKES!). My kiddos then traced their own feet to compare to Shaq and each other. They truly love this measurement unit! Lastly, here are a few of the online games we use doing whole group practice and centers. 
I have multiple laptops set up around the room and my kids love playing math games on them! Simply click each picture below to go to the website! I hope you were able to find some fun new measurement ideas. Be sure to check out all of the other tips below. Have a great week!
Mensuration is the branch of mathematics which studies the measurement of geometric figures and their parameters like length, volume, shape, surface area, lateral surface area, etc. Here, the concepts of mensuration are explained and all the important mensuration formulas and properties of different geometric shapes and figures are covered.

Mensuration Maths - Definition
A branch of mathematics which talks about the length, volume or area of different geometric shapes is called mensuration. These shapes exist in 2 dimensions or 3 dimensions. Let's learn the difference between the two.

Difference Between 2D and 3D shapes

| 2D Shape | 3D Shape |
| --- | --- |
| If a shape is surrounded by three or more straight lines in a plane, then it is a 2D shape. | If a shape is surrounded by a number of surfaces or planes, then it is a 3D shape. |
| These shapes have no depth or height. | These are also called solid shapes and, unlike 2D shapes, they have height or depth. |
| These shapes have only two dimensions: length and breadth. | These are called three-dimensional as they have depth, breadth and length. |
| We can measure their area and perimeter. | We can measure their volume, CSA, LSA or TSA. |

Mensuration in Maths - Important Terminologies
Let's learn a few more definitions related to this topic.

| Term | Abbreviation | Unit | Definition |
| --- | --- | --- | --- |
| Area | A | m² or cm² | The area is the surface which is covered by the closed shape. |
| Perimeter | P | cm or m | The measure of the continuous line along the boundary of the given figure is called the perimeter. |
| Volume | V | cm³ or m³ | In a 3D shape, the space included is called the volume. |
| Curved Surface Area | CSA | m² or cm² | If there is a curved surface, then the total area is called the curved surface area. Example: sphere or cylinder. |
| Lateral Surface Area | LSA | m² or cm² | The total area of all the lateral surfaces that surround the figure is called the lateral surface area. |
| Total Surface Area | TSA | m² or cm² | If there are many surfaces, as in 3D figures, then the sum of the areas of all these surfaces in a closed shape is called the total surface area. |
| Square Unit | – | m² or cm² | The area covered by a square of side one unit is called a square unit. |
| Cube Unit | – | m³ or cm³ | The volume occupied by a cube of side one unit. |

Now let's learn all the important mensuration formulas involving 2D and 3D shapes. Using this mensuration formula list, it will be easy to solve the mensuration problems. In general, the most common formulas in mensuration involve the surface areas and volumes of 2D and 3D figures.

Mensuration Formulas For 2D Shapes

| Shape | Area (square units) | Perimeter (units) |
| --- | --- | --- |
| Rectangle | l × b | 2(l + b) |
| Circle | πr² | 2πr |
| Scalene Triangle | √[s(s − a)(s − b)(s − c)], where s = (a + b + c)/2 | a + b + c |
| Isosceles Triangle | ½ × b × h | 2a + b |
| Equilateral Triangle | (√3/4) × a² | 3a |
| Right Angle Triangle | ½ × b × h | b + hypotenuse + h |
| Rhombus | ½ × d₁ × d₂ | 4 × side |
| Parallelogram | b × h | 2(l + b) |

Mensuration Formulas for 3D Shapes

| Shape | Volume (cubic units) | Curved Surface Area (CSA) or Lateral Surface Area (LSA) (square units) | Total Surface Area (TSA) (square units) |
| --- | --- | --- | --- |
| Cuboid | l × w × h | – | 2(lb + bh + hl) |
| Sphere | (4/3)πr³ | 4πr² | 4πr² |
| Hemisphere | (⅔)πr³ | 2πr² | 3πr² |
| Cylinder | πr²h | 2πrh | 2πrh + 2πr² |
| Cone | (⅓)πr²h | πrl | πr(r + l) |

Question: Find the area and perimeter of a square whose side is 5 cm.
Solution: Given, side a = 5 cm.
Area of a square = a² square units. Substituting the value of a in the formula, we get Area = 5² = 5 × 5 = 25. Therefore, the area of the square = 25 cm².
The perimeter of a square = 4a units, so P = 4 × 5 = 20. Therefore, the perimeter of the square = 20 cm.

Frequently Asked Questions

What is mensuration in Maths?
In maths, mensuration is defined as the study of the measurement of various 2D and 3D geometric shapes, involving their surface areas, volumes, etc.

What is the difference between mensuration and geometry?
Mensuration refers to the calculation of various parameters of shapes like the perimeter, area, volume, etc., whereas geometry deals with the study of properties and relations of points and lines of various shapes.

What are 2D and 3D mensuration?
2D mensuration deals with the calculation of various parameters like the area and perimeter of 2-dimensional shapes such as squares, rectangles, circles and triangles. 3D mensuration is concerned with the study and calculation of the surface area, lateral surface area, and volume of 3-dimensional figures such as cubes, spheres, cuboids, cones and cylinders.
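As an illustration, the short sketch below evaluates a few of the formulas from the tables above, including the worked square example; the circle and cylinder dimensions are chosen arbitrarily.

```python
import math

def square(a):
    return {"area": a * a, "perimeter": 4 * a}

def circle(r):
    return {"area": math.pi * r ** 2, "circumference": 2 * math.pi * r}

def cylinder(r, h):
    csa = 2 * math.pi * r * h                    # curved surface area
    return {"volume": math.pi * r ** 2 * h,
            "CSA": csa,
            "TSA": csa + 2 * math.pi * r ** 2}   # total surface area

print(square(5))            # {'area': 25, 'perimeter': 20} -- matches the worked example
print(circle(7))            # area = 49*pi, circumference = 14*pi
print(cylinder(r=3, h=10))
```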
Interactive Java Tutorials
Electromagnetic radiation, the larger family of wave-like phenomena to which visible light belongs (also known as radiant energy), is the primary vehicle transporting energy through the vast reaches of the universe. This interactive tutorial explores the classical representation of an electromagnetic wave as a sine function, and enables the visitor to vary amplitude and wavelength to demonstrate how this function appears in three dimensions. The tutorial initializes with a sine function simulating electromagnetic wave propagation traversing from left to right across the window. The oscillating electric field vectors of the virtual electromagnetic wave are represented by blue lines, while the magnetic field vectors are depicted in red. In order to operate the tutorial, use the mouse cursor to drag the wave back and forth in the window to observe how it appears from different angles. The Filled slider can be employed to vary the density of vector lines appearing within the sine function, and the Amplitude slider increases or decreases vector amplitude. Placing a checkmark in the Show Wave Color check box changes the color of the wave to match the color corresponding to the current Wavelength slider value. This slider can be utilized to alter the wavelength of the virtual wave over a range of 300 nanometers (ultraviolet) to 800 nanometers (infrared). As the Wavelength slider is translated, the color corresponding to the current wavelength is acquired by the virtual electromagnetic wave (provided the Show Wave Color check box is active), and the name of the color (red, yellow, green, etc.) also appears above the slider bar.

An electromagnetic wave travels or propagates in a direction that is oriented at right angles to the vibrations of both the electric (E) and magnetic (B) oscillating field vectors, transporting energy from the radiation source to an undetermined final destination. The two oscillating energy fields are mutually perpendicular (illustrated in Figure 1) and vibrate in phase following the mathematical form of a sine wave. Electric and magnetic field vectors are not only perpendicular to each other, but are also perpendicular to the direction of wave propagation. By convention, and to simplify illustrations, the vectors representing the electric and magnetic oscillating fields of electromagnetic waves are often omitted, although they are understood to still exist. Whether taking the form of a signal transmitted to a radio from the broadcast station, heat radiating from a fireplace, the dentist's X-rays producing images of teeth, or the visible and ultraviolet light emanating from the sun, the various categories of electromagnetic radiation all share identical and fundamental wave-like properties. Every category of electromagnetic radiation, including visible light, oscillates in a periodic fashion with peaks and valleys (or troughs), and displays a characteristic amplitude, wavelength, and frequency that together define the direction, energy, and intensity of the radiation. The classical schematic diagram of an electromagnetic wave presented in Figure 1 illustrates the sinusoidal nature of oscillating electric and magnetic component vectors as they propagate through space. As a matter of convenience, most illustrations depicting electromagnetic radiation purposely omit the magnetic component, instead representing only the electric field vector as a sine wave in a two-dimensional graphical plot having defined x and y coordinates.
By convention, the y component of the sine wave indicates the amplitude of the electric (or magnetic) field, while the x component represents time, the distance traveled, or the phase relationship with another sine wave. A standard measure of all electromagnetic radiation is the magnitude of the wavelength (in a vacuum), which is usually stated in units of nanometers (one-thousandth of a micrometer) for the visible light portion of the spectrum. The wavelength is defined as the distance between two successive peaks (or valleys) of the waveform (see Figure 1). The corresponding frequency of the radiated wave, which is the number of sinusoidal cycles (oscillations or complete wavelengths) that pass a given point per second, is proportional to the reciprocal of the wavelength. Thus, longer wavelengths correspond to lower frequency radiation and shorter wavelengths correspond to higher frequency radiation. Frequency is usually expressed in quantities of hertz (Hz) or cycles per second (cps).
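The reciprocal relationship just described can be made concrete with a short calculation. Here is a minimal Python sketch (not part of the tutorial) that converts a few wavelengths within the tutorial's 300 to 800 nanometer range into frequencies using frequency = c / wavelength.

c = 2.998e8   # speed of light in a vacuum, in meters per second

for wavelength_nm in (300, 550, 800):            # ultraviolet, green, near-infrared
    frequency_hz = c / (wavelength_nm * 1e-9)    # convert nanometers to meters, then divide
    print(wavelength_nm, "nm ->", f"{frequency_hz:.3e}", "Hz")

# longer wavelengths give lower frequencies: 800 nm is roughly 3.7e14 Hz, 300 nm roughly 1.0e15 Hz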
The four basic mathematical operations are: Adding two (or more) numbers means to find their sum (or total). The symbol used for addition is '+'. For example, 5 + 10 = 15 This is read as five plus ten is equal to fifteen or simply, five plus ten is fifteen. Find the sum of 9 and 8. 9 + 8 = 17 Addition of Large Numbers To add large numbers, list them in columns and then add only those digits that have the same place value. Find the sum of 5897, 78, 726 and 8569. - Write the numbers in columns with the thousands, hundreds, tens and units lined up. - 7 + 8 + 6 + 9 = 30. Thus, the sum of the digits in the units column is 30. So, we place 0 in the units place and carry 3 to the tens place. - The sum of the digits in the tens column after adding 3 is 27. So, we place 7 in the tens place and carry 2 to the hundreds place. - The sum of the digits in the hundreds column after adding 2 is 22. So, we place 2 in the hundreds place and carry 2 to the thousands place. Subtracting one number from another number is to find the difference between them. The symbol used for subtraction is '–'. This is known as the minus sign. For example, 17 – 8 = 9 This is read as seventeen take away eight is equal to nine (or seventeen take away eight is nine). Also, we can say that 17 minus 8 is 9. Subtract 9 from 16. 16 – 9 = 7 Subtraction of Large Numbers To subtract large numbers, list them in columns and then subtract only those digits that have the same place value. Find the difference between 7064 and 489. - Use the equals addition method or the decomposition method. - Line up the thousands, hundreds, tens and units place values for the two numbers, placing the smaller number below the larger number as shown above. Multiplication means times (or repeated addition). The symbol used for multiplication is '×'. For example, 7 × 2 = 14 This is read as seven times two is equal to fourteen or simply, seven times two is fourteen. To multiply a large number by another number, we write the numbers vertically and generally multiply the larger number by the smaller number. A product is the result of the multiplication of two (or more) numbers. Calculate 765 × 9. Write the smaller number, 9, under the larger number, 765, and then calculate the multiplication. - 9 × 5 = 45. So, place 5 units in the units column and carry the 4 (i.e. four tens) to the tens column. - Calculate 9 × 6 and then add 4 to give 58 (i.e. 58 tens). Then place 8 in the tens column and carry 5 to the hundreds column. - Finally multiply 7 by 9 and add 5 to give 68 (i.e. 68 hundreds). Write this number down as shown above. - To multiply two large numbers, write the numbers vertically with the larger number generally being multiplied by the smaller number, which is called the multiplier. - We use the 'times table' to find the product of the larger number with each digit in the multiplier, adding the results. - Remember to add a zero for every place value after the multiplying digit. For example, if the multiplying digit is in the hundreds column, add two zeros for the tens column and for the units column. Calculate 38 × 70. - Multiplying 38 by 70 is quicker than multiplying 70 by 38 as 70 contains a zero. - A zero is placed in the units column. Then we calculate 7 × 38 as shown above. Calculate 385 × 500. - Multiplying 385 by 500 is quicker than multiplying 500 by 385 as 500 contains two zeros. - A zero is placed in the units column and also the tens column. Then we calculate 5 × 385 as shown above. Calculate 169 × 68.
- To multiply 169 by 68, place 68 below 169. - Then we calculate 8 × 169 and 60 × 169 as shown above. Division 'undoes' multiplication and involves a number called the dividend being 'divided' by another number called the divisor. The symbol used for division is '÷'. - As division is the inverse of multiplication, start by dividing 4 into the column furthest to the left. - 6 ÷ 4 = 1 and 2 is the remainder. - Clearly, the remainder 2 is 200 (i.e. 20 tens); and we can carry this into the tens column to make 29. - Now, 29 ÷ 4 = 7 with a remainder of 1. Clearly, the remainder of 1 is 10 (i.e. 10 units) and we carry this into the units column to make 12. - Finally, 12 ÷ 4 = 3. (A short code sketch of this digit-by-digit procedure appears after the resource list below.) - The four basic mathematical operations are: - Adding two (or more) numbers means to find their sum (or total). - Subtracting one number from another number is to find the difference between them. - Multiplication means times (or repeated addition). A product is the result of the multiplication of two (or more) numbers. - Division 'undoes' multiplication. basic operations, addition, sum, total, subtraction, difference, minus sign, equals addition method, decomposition method, multiplication, times, repeated addition, product, division, dividend, divisor, quotient, remainder This list consists of visual resources, activities and games designed to support the new curriculum programme of study in Years Five and Six. Containing tips on using the resources and suggestions for further use, it covers: Year 5: Add and subtract whole numbers with more than 4 digits, including formal written methods, add and subtract numbers mentally with increasingly large numbers, use rounding to check answers and determine levels of accuracy, solve addition and subtraction multi-step problems in contexts. Year 6: Perform mental calculations, including with mixed operations and large numbers, use knowledge of the order of operations to carry out calculations involving the four operations, solve addition and subtraction multi-step problems in contexts, solve problems involving addition, subtraction, multiplication and division, use estimation to check answers to calculations and determine, in the context of a problem, an appropriate degree of accuracy. Visit the primary mathematics webpage to access all lists. Links and Resources The activities in this pack are designed for use as lesson starters or plenaries, but some could also be extended into a longer activity. The activity on page 8 is a ready-made chain game which could be used with the whole class to practise mental addition and subtraction. Classes could set themselves a time limit in which to complete it and improve on this each time. Blank cards are provided for teachers to write in extra questions to help differentiate the activity. The cards are at the end of the file. This resource offers many activity ideas, games and worksheets which practise different areas of mathematics. Topic 17 introduces four-digit subtraction, including the exchange of a thousand for ten hundreds. It offers investigations and worded problems and a checking system for subtraction. The workbook and answer book can be found here. By Years Five and Six, children will have encountered many different ways of carrying out subtraction. This video shows examples of subtraction methods used within a Year Six class. It includes examples of finding the difference by counting on, along a number line and in a column.
It also shows an interesting method where column subtraction is done without borrowing but by using negative numbers. This interactive resource is a great way of helping children understand the process of column subtraction. Three digit numbers are partitioned and place value counters are used before carrying out the column subtraction. This is a useful step when moving children towards a more formal written method for subtraction. Make your own place value counters and use them in class. A great aid for children struggling with column subtraction or for the whole class, dependent on specific class needs. There are examples of expanded subtraction and column subtraction both with and without borrowing. This book provides a wealth of games which practise many aspects of mathematics. Aimed at children working within the curriculum levels 3-6. Jumble (sheet 18) practises mixed number operations. Stopper (sheet 23) practises adding several single-digit numbers. Snowman (sheet 25) practises addition and subtraction of three-digit numbers. Add and Match (sheet 32) practises adding sets of three single-digit numbers, aiming to make equal totals. Go further with Number Skills provides 40 more activity sheets which are a great addition to many lessons. This article from NRICH discusses ways in which teachers may develop children's problem solving skills. It provides ideas and links which would benefit a teacher's own practice or could be used as a basis of a staff training session. Here are nine challenges from NRICH which support Addition and Subtraction at KS2. Addition pack one contains fifteen work cards with activities on simple counting, number bonds to ten, addition using money and adding two digit numbers. Addition pack two contains eleven work cards with slightly more challenging activities. Students are required to add two digit numbers which require a digit to be carried, know number bonds up to a hundred, to find multiples of ten and to be able to use a calculator to solve more challenging problems. Addition pack three contains nine work cards in which the degree of challenge is greater. Students are required to add simple decimals, solve more challenging puzzles and add larger decimal numbers using money as the context. This resource contains one pack of games, investigations, worksheets and practical activities supporting the teaching and learning of subtraction. The six work cards provide activities covering subtracting two digit numbers using physical apparatus, and using the column method.
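The column methods described earlier can also be expressed as a short program. Below is a minimal Python sketch (not taken from the resources above) of digit-by-digit long division with carried remainders; the worked division steps earlier in this section appear to use 692 ÷ 4, although the dividend is never stated explicitly, so that value is an assumption here.

def long_divide(dividend, divisor):
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)            # bring down the next digit
        quotient_digits.append(str(remainder // divisor))  # record this column's quotient digit
        remainder = remainder % divisor                    # carry the remainder to the next column
    return int("".join(quotient_digits)), remainder

print(long_divide(692, 4))         # (173, 0), matching the worked steps above
print(5897 + 78 + 726 + 8569)      # 15270, the total from the column-addition example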
Overhead power line An overhead power line is a structure used in electric power transmission and distribution to transmit electrical energy along large distances. It consists of one or more conductors (commonly multiples of three) suspended by towers or poles. Since most of the insulation is provided by air, overhead power lines are generally the lowest-cost method of power transmission for large quantities of electric energy. Towers for support of the lines are made of wood (as-grown or laminated), steel (either lattice structures or tubular poles), and occasionally reinforced wood. The bare wire conductors on the line are generally made of aluminum (either plain or reinforced with steel, or composite materials such as carbon and glass fiber), though some copper wires are used in medium-voltage distribution and low-voltage connections to customer premises. A major goal of overhead power line design is to maintain adequate clearance between energized conductors and the ground so as to prevent dangerous contact with the line, and to provide reliable support for the conductors, resilient to storms, ice load, earthquakes and other potential causes of damage. Today overhead lines are routinely operated at voltages exceeding 765,000 volts between conductors, with even higher voltages possible in some cases. - 1 Classification by operating voltage - 2 Structures - 3 Insulators - 4 Conductors - 5 Compact transmission lines - 6 Low voltage - 7 Train power - 8 Further applications - 9 Use of area under overhead power lines - 10 Interface of aviation with power lines - 11 History - 12 Mathematical analysis - 13 See also - 14 References - 15 Further reading Classification by operating voltage Overhead power transmission lines are classified in the electrical power industry by the range of voltages: - Low voltage (LV) – less than 1000 volts, used for connection between a residential or small commercial customer and the utility. - Medium voltage (MV; distribution) – between 1000 volts (1 kV) and 69 kV, used for distribution in urban and rural areas. - High voltage (HV; subtransmission less than 100 kV; subtransmission or transmission at voltage such as 115 kV and 138 kV), used for sub-transmission and transmission of bulk quantities of electric power and connection to very large consumers. - Extra high voltage (EHV; transmission) – over 230 kV, up to about 800 kV, used for long distance, very high power transmission. - Ultra high voltage (UHV) – higher than 800 kV. Structures for overhead lines take a variety of shapes depending on the type of line. Structures may be as simple as wood poles directly set in the earth, carrying one or more cross-arm beams to support conductors, or "armless" construction with conductors supported on insulators attached to the side of the pole. Tubular steel poles are typically used in urban areas. High-voltage lines are often carried on lattice-type steel towers or pylons. For remote areas, aluminum towers may be placed by helicopters. Concrete poles have also been used. Poles made of reinforced plastics are also available, but their high cost restricts application. Each structure must be designed for the loads imposed on it by the conductors. The weight of the conductor must be supported, as well as dynamic loads due to wind and ice accumulation, and effects of vibration.
Where conductors are in a straight line, towers need only resist the weight since the tension in the conductors approximately balances with no resultant force on the structure. Flexible conductors supported at their ends approximate the form of a catenary, and much of the analysis for construction of transmission lines relies on the properties of this form. A large transmission line project may have several types of towers, with "tangent" ("suspension" or "line" towers, UK) towers intended for most positions and more heavily constructed towers used for turning the line through an angle, dead-ending (terminating) a line, or for important river or road crossings. Depending on the design criteria for a particular line, semi-flexible type structures may rely on the weight of the conductors to be balanced on both sides of each tower. More rigid structures may be intended to remain standing even if one or more conductors is broken. Such structures may be installed at intervals in power lines to limit the scale of cascading tower failures. Foundations for tower structures may be large and costly, particularly if the ground conditions are poor, such as in wetlands. Each structure may be stabilized considerably by the use of guy wires to counteract some of the forces applied by the conductors. For a single wood utility pole structure, a pole is placed in the ground, then three crossarms extend from this, either staggered or all to one side. The insulators are attached to the crossarms. For an "H"-type wood pole structure, two poles are placed in the ground, then a crossbar is placed on top of these, extending to both sides. The insulators are attached at the ends and in the middle. Lattice tower structures have two common forms. One has a pyramidal base, then a vertical section, where three crossarms extend out, typically staggered. The strain insulators are attached to the crossarms. Another has a pyramidal base, which extends to four support points. On top of this a horizontal truss-like structure is placed. A grounded cable called a static line is sometimes strung along the tops of the towers to provide lightning protection. An optical ground wire is a more advanced version with embedded optical fibers for communication. A single-circuit transmission line carries conductors for only one circuit. For a three-phase system, this implies that each tower supports three conductors. A double-circuit transmission line has two circuits. For three-phase systems, each tower supports and insulates six conductors. Single phase AC-power lines as used for traction current have four conductors for two circuits. Usually both circuits operate at the same voltage. In HVDC systems typically two conductors are carried per line, but in rare cases only one pole of the system is carried on a set of towers. In some countries like Germany most power lines with voltages above 100 kV are implemented as double, quadruple or in rare cases even hextuple power line as rights of way are rare. Sometimes all conductors are installed with the erection of the pylons; often some circuits are installed later. A disadvantage of double circuit transmission lines is that maintenance works can be more difficult, as either work in close proximity of high voltage or switch-off of 2 circuits is required. In case of failure, both systems can be affected. The largest double-circuit transmission line is the Kita-Iwaki Powerline. 
Insulators must support the conductors and withstand both the normal operating voltage and surges due to switching and lightning. Insulators are broadly classified as either pin-type, which support the conductor above the structure, or suspension type, where the conductor hangs below the structure. The invention of the strain insulator was a critical factor in allowing higher voltages to be used. At the end of the 19th century, the limited electrical strength of telegraph-style pin insulators restricted the voltage to no more than 69,000 volts. Up to about 33 kV (69 kV in North America) both types are commonly used. At higher voltages only suspension-type insulators are common for overhead conductors. Insulators are usually made of wet-process porcelain or toughened glass, with increasing use of glass-reinforced polymer insulators. However, with rising voltage levels, polymer insulators (silicone rubber based) are seeing increasing usage. China has already developed polymer insulators having a highest system voltage of 1100 kV and India is currently developing a 1200 kV (highest system voltage) line which will initially be charged with 400 kV to be upgraded to a 1200 kV line. Suspension insulators are made of multiple units, with the number of unit insulator disks increasing at higher voltages. The number of disks is chosen based on line voltage, lightning withstand requirement, altitude, and environmental factors such as fog, pollution, or salt spray. Where these conditions are severe, longer insulators with greater creepage distance for leakage current are required. Strain insulators must be strong enough mechanically to support the full weight of the span of conductor, as well as loads due to ice accumulation, and wind. Porcelain insulators may have a semi-conductive glaze finish, so that a small current (a few milliamperes) passes through the insulator. This warms the surface slightly and reduces the effect of fog and dirt accumulation. The semiconducting glaze also ensures a more even distribution of voltage along the length of the chain of insulator units. Polymer insulators by nature have hydrophobic characteristics providing for improved wet performance. Also, studies have shown that the specific creepage distance required in polymer insulators is much lower than that required in porcelain or glass. Additionally, the mass of polymer insulators (especially in higher voltages) is approximately 30% to 50% less than that of a comparative porcelain or glass string. Better pollution and wet performance is leading to the increased use of such insulators. Insulators for very high voltages, exceeding 200 kV, may have grading rings installed at their terminals. This improves the electric field distribution around the insulator and makes it more resistant to flash-over during voltage surges. The most common conductor in use for transmission today is aluminum conductor steel reinforced (ACSR). Also seeing much use is all-aluminum-alloy conductor (AAAC). Aluminum is used because it has about half the weight of a comparable resistance copper cable (though larger diameter due to lower specific conductivity), as well as being cheaper. Copper was more popular in the past and is still in use, especially at lower voltages and for grounding. Bare copper conductors are light green. While larger conductors may lose less energy due to lower electrical resistance, they are more costly than smaller conductors.
An optimization rule called Kelvin's Law states that the optimum size of conductor for a line is found when the cost of the energy wasted in the conductor is equal to the annual interest paid on that portion of the line construction cost due to the size of the conductors. The optimization problem is made more complex by additional factors such as varying annual load, varying cost of installation, and the discrete sizes of cable that are commonly made. Since a conductor is a flexible object with uniform weight per unit length, the geometric shape of a conductor strung on towers approximates that of a catenary. The sag of the conductor (vertical distance between the highest and lowest point of the curve) varies depending on the temperature and additional load such as ice cover. A minimum overhead clearance must be maintained for safety. Since the temperature of the conductor increases with increasing heat produced by the current through it, it is sometimes possible to increase the power handling capacity (uprate) by changing the conductors for a type with a lower coefficient of thermal expansion or a higher allowable operating temperature. Two such conductors that offer reduced thermal sag are known as composite core conductors (ACCR and ACCC conductor). In lieu of steel core strands that are often used to increase overall conductor strength, the ACCC conductor uses a carbon and glass fiber core that offers a coefficient of thermal expansion about 1/10 of that of steel. While the composite core is nonconductive, it is substantially lighter and stronger than steel, which allows the incorporation of 28% more aluminum (using compact trapezoidal shaped strands) without any diameter or weight penalty. The added aluminum content helps reduce line losses by 25 to 40% compared to other conductors of the same diameter and weight, depending upon electric current. The carbon core conductor's reduced thermal sag allows it to carry up to twice the current ("ampacity") compared to all-aluminum conductor (AAC) or ACSR. The power lines and their surroundings must be maintained by linemen, sometimes assisted by helicopters with pressure washers or circular saws which may work 3 times faster. However this work often occurs in the dangerous areas of the Helicopter height–velocity diagram. For transmission of power across long distances, high voltage transmission is employed. Transmission higher than 132 kV poses some problems, such as the corona effect, which cause significant power loss and interference with communication circuits. In order to reduce this corona effect, it is preferable to use more than one conductor per phase, or bundled conductors. Bundle conductors consist of several parallel cables connected at intervals by spacers, often in a cylindrical configuration. The optimum number of conductors depends on the current rating, but typically higher-voltage lines also have higher current. There is also some advantage due to lower corona loss. American Electric Power is building 765 kV lines using six conductors per phase in a bundle. Spacers must resist the forces due to wind, and magnetic forces during a short-circuit. - Bundled conductors reduce the voltage gradient in the vicinity of the line. This reduces the possibility of corona discharge. At extra high voltage, the electric field gradient at the surface of a single conductor is high enough to ionize air, which wastes power, generates unwanted audible noise and interferes with communication systems. 
The field surrounding a bundle of conductors is similar to the field that would surround a single, very large conductor—this produces lower gradients, which mitigates issues associated with high field strength. - Improvements in the transmission efficiency as loss due to corona effect is countered. - Bundled conductor lines will have higher capacitance in comparison with single lines. Thus, they will have higher charging currents, which helps in improving power factor. - When transmitting alternating current, bundle conductors also avoid the reduction in ampacity of a single large conductor due to the skin effect. - A bundle conductor also has lower reactance, compared to a single conductor. - Additionally, bundled conductors cool themselves more efficiently due to the increased surface area of the conductors, further reducing line losses. - The increased Geometric Mean Radius (GMR) reduces line reactance and inductance. - Wind resistance is higher (higher forces are a disadvantage), but oscillations can be damped by damping bundle spacers. Inductance and bundled conductors In addition to reducing corona losses and improving the skin effect, conductor bundling also reduces line inductance. Low line inductance is highly desired because it reduces reactive current flow, line heating, and voltage drop across transmission lines. For a non-bundled transmission line, two parameters of transmission lines affect the inductance: the geometric mean radius, D, and the equivalent conductor radius, rx. The geometric mean radius is the geometric mean of the distance between phases in a bundled or non-bundled transmission line. For example, a 3-phase system with equal line spacing d and conductors arranged in a straight line has a geometric mean radius of D = (d · d · 2d)^(1/3) = 2^(1/3) · d, where 2d represents the distance between the two outermost phases. The conductor radius rx is the effective radius of a single conductor. The equation for inductance per unit length is then L = (μ0 / 2π) · ln(D / rx), where ln is the logarithm to the natural base e. Usually, the rx value is tabulated because it depends on the exact composition of the conductor and inductive properties that result—these are hard to describe analytically, especially in the case of composite conductors. Typical values of rx range from 6 to 18 mm. For a bundled transmission cable, two additional factors affect the line inductance: the bundle diameter DB and the geometric arrangement of the bundle. These two parameters can be used to calculate an effective bundled cable radius, DBE. - Two-Conductor Bundle Equation: DBE = (rx · DB)^(1/2) - Three-Conductor Bundle Equation: DBE = (3 · rx · (DB/2)^2)^(1/3) - Four-Conductor Bundle Equation: DBE = (4 · rx · (DB/2)^3)^(1/4) - n-Conductor Bundle Equation: DBE = (n · rx · (DB/2)^(n-1))^(1/n) The resulting line inductance equation is nearly identical; however, the equivalent bundled radius DBE is substituted for the effective cable radius rx, giving L = (μ0 / 2π) · ln(D / DBE), where ln is again the logarithm to the natural base e. All of the above effects can be attributed to the concept of Geometric Mean Radius (GMR). By putting several cylindrical cables together to attain a single large cable, the action in effect increases the radius of the unit, lowering the inductance of the conductor. The assumption when carrying out this calculation is that the distance between phases is much larger than the GMR of each conductor. - Bundled conductors have higher wind loading, although much lower than the solid tube of equal diameter which they resemble electrically. - Bundled conductors are also more expensive and difficult to install. Overhead power lines are often equipped with a ground conductor (shield wire or overhead earth wire).
The ground conductor is usually grounded (earthed) at the top of the supporting structure, to minimize the likelihood of direct lightning strikes to the phase conductors. In circuits with earthed neutral, it also serves as a parallel path with the earth for fault currents. Very high-voltage transmission lines may have two ground conductors. These are either at the outermost ends of the highest cross beam, at two V-shaped mast points, or at a separate cross arm. Older lines may use surge arresters every few spans in place of a shield wire; this configuration is typically found in the more rural areas of the United States. By protecting the line from lightning, the design of apparatus in substations is simplified due to lower stress on insulation. Shield wires on transmission lines may include optical fibers (optical ground wires/OPGW), used for communication and control of the power system. At some HVDC converter stations, the ground wire is also used as the electrode line to connect to a distant grounding electrode. This allows the HVDC system to use the earth as one conductor. In this case the ground conductor is mounted on small insulators bridged by lightning arrestors above the phase conductors, and the insulation prevents electrochemical corrosion of the pylon. Medium-voltage distribution lines may also use one or two shield wires, or may have the grounded conductor strung below the phase conductors to provide some measure of protection against tall vehicles or equipment touching the energized line, as well as to provide a neutral line in Wye wired systems. On some power lines for very high voltages in the former Soviet Union, the ground wire is used for PLC-radio systems and mounted on insulators at the pylons. Insulated conductors and cable Overhead insulated cables are rarely used, usually for short distances (less than a kilometer). Insulated cables can be directly fastened to structures without insulating supports. An overhead line with bare conductors insulated by air is typically less costly than a cable with insulated conductors. A more common approach is "covered" line wire. It is treated as bare cable, but often is safer for wildlife, as the insulation on the cables increases the likelihood that a large-wing-span raptor will survive a brush with the lines, and reduces the overall danger of the lines slightly. These types of lines are often seen in the eastern United States and in heavily wooded areas, where tree-line contact is likely. The only pitfall is cost, as insulated wire is often costlier than its bare counterpart. Many utility companies implement covered line wire as jumper material where the wires are often closer to each other on the pole, such as an underground riser/pothead, and on reclosers, cutouts and the like. Compact transmission lines A compact overhead transmission line requires a smaller right of way than a standard overhead powerline. Conductors must not get too close to each other. This can be achieved either by short span lengths and insulating crossbars, or by separating the conductors in the span with insulators. The first type is easier to build as it does not require insulators in the span, which may be difficult to install and to maintain. Compact transmission lines may be designed for voltage upgrade of existing lines to increase the power that can be transmitted on an existing right of way.
Low voltage overhead lines may use either bare conductors carried on glass or ceramic insulators or an aerial bundled cable system. The number of conductors may be anywhere from four (three phase plus a combined earth/neutral conductor – a TN-C earthing system) up to as many as six (three phase conductors, separate neutral and earth, plus street lighting supplied by a common switch). Train power Overhead lines or overhead wires are used to transmit electrical energy to trams, trolleybuses or trains. Such an overhead line consists of one or more overhead wires situated over rail tracks. Feeder stations at regular intervals along the overhead line supply power from the high-voltage grid. In some cases low-frequency AC is used, and distributed by a special traction current network. Further applications Overhead lines are also occasionally used to supply transmitting antennas, especially for efficient transmission of long, medium and short waves. For this purpose a staggered array line is often used. Along a staggered array line the conductor cables for the supply of the earth net of the transmitting antenna are attached on the exterior of a ring, while the conductor inside the ring is fastened to insulators leading to the high-voltage standing feeder of the antenna. Use of area under overhead power lines Use of the area below an overhead line is limited because objects must not come too close to the energized conductors. Overhead lines and structures may shed ice, creating a hazard. Radio reception can be impaired under a power line, due both to shielding of a receiver antenna by the overhead conductors and to partial discharge at insulators and sharp points of the conductors, which creates radio noise. In the area surrounding overhead lines it is dangerous to engage in activities that risk contact or interference, e.g. flying kites or balloons, using ladders, or operating machinery. Overhead distribution and transmission lines near airfields are often marked on maps, and the lines themselves marked with conspicuous plastic reflectors, to warn pilots of the presence of conductors. Construction of overhead power lines, especially in wilderness areas, may have significant environmental effects. Environmental studies for such projects may consider the effect of bush clearing, changed migration routes for migratory animals, possible access by predators and humans along transmission corridors, disturbances of fish habitat at stream crossings, and other effects. Interface of aviation with power lines General aviation, hang gliding, paragliding, skydiving, and kite flying all have an important interface with power lines. Nearly every kite product warns users to stay away from power lines. Many deaths occur when aircraft (powered and unpowered) crash into power lines. Some power lines are marked with visibility bulbs. The placement of power lines sometimes uses up sites that would otherwise be used by hang gliders. History The first transmission of electrical impulses over an extended distance was demonstrated on July 14, 1729 by the physicist Stephen Gray. The demonstration used damp hemp cords suspended by silk threads (the low resistance of metallic conductors not being appreciated at the time). However, the first practical use of overhead lines was in the context of telegraphy. By 1837 experimental commercial telegraph systems ran as far as 20 km (13 miles). Electric power transmission was accomplished in 1882 with the first high-voltage transmission between Munich and Miesbach (60 km).
1891 saw the construction of the first three-phase alternating current overhead line, on the occasion of the International Electricity Exhibition in Frankfurt, between Lauffen and Frankfurt. In 1912 the first 110 kV overhead power line entered service, followed by the first 220 kV overhead power line in 1923. In the 1920s RWE AG built the first overhead line for this voltage and in 1926 built a Rhine crossing with the pylons of Voerde, two masts 138 meters high. In 1953, the first 345 kV line was put into service by American Electric Power in the United States. In Germany in 1957 the first 380 kV overhead power line was commissioned (between the transformer station and Rommerskirchen). In the same year the overhead line traversing the Strait of Messina went into service in Italy; its pylons served as the model for Elbe crossing 1. This in turn was used as the model for the building of Elbe crossing 2 in the second half of the 1970s, which saw the construction of the highest overhead line pylons in the world. Earlier, in 1952, the first 380 kV line was put into service in Sweden, a 1000 km (625 mile) line between the more populated areas in the south and the largest hydroelectric power stations in the north. Starting from 1967, in Russia and also in the USA and Canada, overhead lines for voltages of 765 kV were built. In 1982 overhead power lines were built in the Soviet Union between Elektrostal and the power station at Ekibastuz; this was a three-phase alternating current line at 1150 kV (the Ekibastuz–Kokshetau powerline). In 1999, the first powerline designed for 1000 kV with two circuits, the Kita-Iwaki Powerline, was built in Japan. In 2003 the building of the highest overhead line commenced in China, the Yangtze River Crossing. Mathematical analysis An overhead power line is one example of a transmission line. At power system frequencies, many useful simplifications can be made for lines of typical lengths. For analysis of power systems, the distributed resistance, series inductance, shunt leakage resistance and shunt capacitance can be replaced with suitable lumped values or simplified networks. Short and medium line model A short length of power line (less than 80 km) can be approximated by a resistance in series with an inductance, ignoring the shunt admittances: the series impedance per unit length is z = r + jωL, with r the resistance and L the inductance per unit length. This value is not the total impedance of the line, but rather the series impedance per unit length of line. For a longer length of line (80–250 km), a shunt capacitance is added to the model. In this case it is common to distribute half of the total capacitance to each side of the line. As a result, the power line can be represented as a two-port network, such as ABCD parameters. The circuit can be characterized by the total series impedance Z = z · l, where - Z is the total series line impedance - z is the series impedance per unit length - l is the line length - ω is the sinusoidal angular frequency The medium line has an additional total shunt admittance Y = y · l, where - Y is the total shunt line admittance - y is the shunt admittance per unit length See also - Aerial cable - Conductor marking lights - CU project controversy - Overhead cable - Overhead line - Raptor conservation - Third rail - Operation Outward - Powerline river crossings in the United Kingdom - Wireless monitoring of overhead power lines
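As a numerical illustration of the medium-length line model described above, here is a minimal Python sketch using the common nominal-pi form of the two-port (ABCD) parameters, A = D = 1 + ZY/2, B = Z, C = Y(1 + ZY/4). The per-unit-length values, line length, and receiving-end conditions below are illustrative assumptions, not figures from the article.

import cmath

length_km = 200.0                 # within the 80-250 km "medium line" range
z = 0.05 + 0.45j                  # assumed series impedance per km, ohms (illustrative)
y = 3.0e-6j                       # assumed shunt admittance per km, siemens (illustrative)

Z = z * length_km                 # total series impedance, Z = z * l
Y = y * length_km                 # total shunt admittance, Y = y * l (half lumped at each end)

A = D = 1 + Z * Y / 2             # nominal-pi ABCD parameters
B = Z
C = Y * (1 + Z * Y / 4)

Vr = 132e3 / cmath.sqrt(3)        # assumed receiving-end phase voltage, volts
Ir = 200.0                        # assumed receiving-end current, amperes, unity power factor
Vs = A * Vr + B * Ir              # sending-end voltage
Is = C * Vr + D * Ir              # sending-end current
print(abs(Vs), abs(Is))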
Einstein and the photoelectric effect Figure 1. Albert Einstein. Figure 2. Photoelectric effect caused by light falling onto the metal sodium. Mention Albert Einstein and the first thing that springs to mind is the theory of relativity, that other extraordinary supernova that burst upon twentieth-century physics. Yet, incredibly, Einstein never won a Nobel Prize for relativity. His one Nobel medal (he surely should have got at least two), awarded in 1921 and presented in 1922, was for his pioneering work in quantum theory. If Planck hadn't fathered quantum theory (see Max Planck and the origins of quantum theory) that role may well have fallen to Einstein. As it was, Einstein was the first person to take the physical implications of Planck's work seriously. The turning point came when he saw how Planck's idea of energy quanta could be used to account for some puzzling facts that had emerged about a phenomenon known as the photoelectric effect. Early studies of the photoelectric effect In 1887, Heinrich Hertz became the first person to observe the photoelectric effect during his experiments that confirmed Maxwell's theory of electromagnetism. Hertz found that by shining ultraviolet light onto metal electrodes, he could lower the voltage needed to make sparks hop between the electrodes. The light obviously had some electrical effect, but Hertz stopped short of speculating what that might be. "I confine myself at present," he said, "to communicating the results obtained, without attempting any theory respecting the manner in which the observed phenomena are brought about." In 1899 the English physicist J. J. Thomson offered an important clue toward understanding the photoelectric effect. Thomson showed that ultraviolet light, falling onto a metal surface, triggered the emission of electrons. These were tiny charged particles whose existence Thomson had demonstrated a couple of years earlier and which he believed were the only material components of atoms. The photoelectric effect, it seemed to physicists at the time, must come about because electrons inside the atoms in a metal's surface were shaken and made to vibrate by the oscillating electric field of light waves falling on the metal. Some of the electrons would be shaken so hard, the theory went, that eventually they'd be tossed out altogether. In 1902, Philipp Lenard, who'd earlier been an assistant to Hertz at the University of Bonn, made the first quantitative measurements of the photoelectric effect. He used a bright carbon arc light to study how the energy of the emitted photoelectrons varied with the intensity of the light and, by separating out individual colors, with the frequency of light. Increasing the frequency of light, by selecting light from the bluer end of the spectrum, caused the ejected electrons on average to be more energetic, as predicted – because it was assumed they'd been made to vibrate faster. Increasing the intensity of light (by moving the carbon arc closer to the metal surface) caused more electrons to be thrown out, also as expected. On the other hand, increasing the intensity had no effect at all on the average amount of energy that each ejected electron carried away. That came as a real shock. If, as physicists believed, the photoelectric effect followed from an interaction between electrons and electromagnetic waves, then intensifying the radiation ought to shake the electrons in the metal surface harder and so shoot them out with more energy. It was a mystery why this didn't happen. 
Quanta of light Several years went by before Lenard's observations on the photoelectric effect and Planck's strange but neglected theory of the quantum, both puzzling in themselves, were seen as arrows pointing to a common solution. Looking back now, it seems clear enough, but it took the genius of Einstein to apply quantization, not to blackbody oscillators as Planck had done in a desperate effort to patch up classical theory, but to the actual radiation that's emitted or absorbed. Light itself is quantized, Einstein realized. All the light of a particular frequency comes in little bullets of the same energy, equal to the frequency multiplied by Planck's constant, and that's the key to understanding the photoelectric effect. An incoming light quantum smashes into an electron on the surface of a metal and gives up all of its energy to the electron. A certain amount of energy, called the work function, is needed simply to overcome the force of attraction between the electron and the metallic lattice in order to set the electron free; so there can't be any photoelectric effect unless this threshold is reached. Any energy left over from the exchange, above and beyond the work function, appears as kinetic energy (energy of motion) of the ejected electron. Increasing the intensity of radiation – the number of light quanta per unit area – has no effect on the energy of individual electrons because each electron is thrown out by one and only one parcel of light. Increasing the frequency of radiation, on the other hand, means that each light bullet packs a bigger wallop, which results in a more energetic photoelectron. The fact that 16 years went by before Einstein won a Nobel Prize for his ground-breaking work on the photoelectric effect, reflects how long it took the scientific world to accept that radiant energy is quantized. That may seem like an age, but the idea that energy, including light, is granular ran counter to everything that physicists had been taught for several generations: matter is made of particles; energy is continuous and tradable in arbitrarily small amounts; light consists of waves; matter and light don't intermingle. These rules had been the mantras of physics for much of the 19th century and were now being overturned. There was also the issue of experimental proof. It took a decade or so for the details of Einstein's photoelectric theory to be thoroughly tested and verified in the lab. The actual observation that the kinetic energy of electrons kicked out by the photoelectric effect is tied to the frequency of incoming light in exactly the way Einstein prescribed was finally made in 1916 by the American physicist Robert Millikan. Millikan had, in fact, long been expecting to prove Einstein wrong and thereby to uphold the wave theory of light. Instead he wound up giving powerful support to the particle theory and measuring Planck's constant to within 5 percent of its currently accepted value. Ironically, he won the Nobel Prize in 1923 for a superb series of experiments that dashed what earlier had been his greatest scientific hope. We talk about the quantum revolution – but it wasn't an overnight affair, this overthrow of the old worldview of matter and energy in favor of a new one. It was more than two decades after Planck's first inkling of the existence of quanta when quantum theory was fully accepted and acknowledged as the reigning paradigm of the microcosmos. For the first part of this interregnum, Einstein was at the cutting edge of developments. 
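The relation described above (each quantum delivers an energy equal to Planck's constant times the frequency, and the ejected electron keeps whatever exceeds the work function) can be checked with a short calculation. The following minimal Python sketch is illustrative only; the sodium work function used here is an approximate assumed value, not a figure from this article.

h = 6.626e-34          # Planck's constant, J*s
c = 2.998e8            # speed of light, m/s
eV = 1.602e-19         # joules per electronvolt

wavelength = 400e-9             # violet light, metres
work_function_eV = 2.3          # approximate work function of sodium (assumption)

photon_energy_eV = h * c / wavelength / eV
kinetic_energy_eV = max(photon_energy_eV - work_function_eV, 0.0)
print(round(photon_energy_eV, 2), round(kinetic_energy_eV, 2))
# about 3.10 eV per quantum, leaving roughly 0.8 eV of kinetic energy;
# raising the intensity changes how many electrons emerge, not these numbers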
Following his seminal 1905 photoelectric paper, he worked on meshing Planck's notion of the quantum with other areas of physics. For instance, he showed that some anomalies to do with how much heat substances have to absorb to raise their temperature by a certain amount are best explained if the energy of vibration of atoms is assumed to be quantized. This early quantum pioneering by Einstein now seems almost entirely overshadowed by his work on relativity, but it was instrumental at the time in persuading scientists of the validity of quantum theory when applied to matter. His views on the quantum nature of electromagnetic radiation proved a harder sell. Yet, he insisted that the way ahead had to lie with some acceptance of light's particlelike behavior. In 1909 he wrote: "It is my opinion that the next phase in the development of theoretical physics will bring us a theory of light that can be interpreted as a kind of fusion of the wave and emission theory." In 1911, at the first Solvay Congress (an annual meeting of the world's top physicists) he was more forceful: "I insist on the provisional character of this concept, which does not seem reconcilable with the experimentally verified consequences of the wave theory." That apparent irreconcilability was a major stumbling block for all scientists. What kind of madness was it to argue that light could be both a particle and a wave? Experimentalists railed at the prospect of what Einstein's equation of the photoelectric effect implied. Robert Millikan, the very man who showed that the equation really did work, would have nothing to do with its physical interpretation. In 1915, Millikan wrote: "The semicorpuscular theory by which Einstein arrived at his equation seems at present wholly untenable." Three years later, Ernest Rutherford, the great New Zealand physicist who probed the structure of the atom, said there appeared to be "no physical connection" between the energy and frequency in Einstein's hypothesis about light quanta. It didn't seem to make sense that a particle could have a frequency, or that a wave could act as if it were made of energetic particles. The two concepts seemed to rule each other out. Final proof of the particle nature of light Between 1911 and 1916, Einstein took a sabbatical from his quantum work to attend to another little problem – the general theory of relativity, which transformed our ideas on gravity. Upon his return to the physics of the very small, he quickly grasped a link between quantum theory and relativity that convinced him of the reality of the particle aspect of light. In earlier work, Einstein had treated each quantum of radiation as if it had a momentum equal to the energy of the quantum divided by the velocity of light. By making this assumption he was able to explain how momentum is transferred from radiation to matter – in other words, how atoms and molecules are buffeted when they absorb radiation. Although this buffeting was much too small to be seen directly, it had effects on properties, such as the pressure of a gas, that could be measured. These measurements fitted with the formula for quantized momentum. Einstein now realized, in coming back to his quantum studies, that exactly the same expression for the momentum of a light quantum fell straight out of a basic equation in relativity theory. This link between relativity and the earlier assumption about the momentum of a radiation quantum clinched the case for light particles in Einstein's mind. 
In 1917, he may have been the only major scientist alive who believed that light had a genuine particle aspect. But the fact that his theory now insisted that whenever these supposed light quanta interacted with particles of ordinary matter a definite, predictable amount of momentum should be transferred, paved the way for experimental tests. Six years later, the particle nature of light had been put virtually beyond dispute. At the heart of lab work that ultimately proved the reality of radiation quanta was the American physicist Arthur Compton. In his early days at Princeton, Compton devised an elegant way of demonstrating Earth's rotation, but he soon launched into a series of studies involving X-rays that climaxed in the final victory of quantum physics over the old world order. In his mid-twenties Compton hatched a theory of the intensity of X-ray reflection from crystals that gave a powerful tool for studying the crystallographic arrangement of electrons and atoms in a substance. In 1918 he began a study of X-ray scattering that led inevitably to the question of what happens when X-rays interact with electrons. The key breakthrough came in 1922 and was published the following year. Compton found that when X-rays scatter from free electrons (electrons not tightly bound inside atoms) the wavelength of the X-rays increases. He explained this effect, now known as the Compton effect, in terms of radiation quanta colliding with electrons, one quantum per electron, and giving up some of their energy (or momenta) in the process. Energy lost translated to frequency decrease, or wavelength increase, according to the Planck formula. A further boost for this interpretation came from a device invented by Charles Wilson. Inspired by the wonderful cloud effects he'd seen from the peak of Ben Nevis, in his native Scotland, Wilson built a vessel in which he could create miniature artificial clouds. This cloud chamber proved invaluable for studying the behavior of charged particles, since water droplets condensing in the wake of a moving ion or electron left a visible trail. Wilson's cloud chamber revealed the paths of the recoil electrons in the Compton effect, showing clearly that the electrons moved as if struck by other particles – X-ray quanta – which, being uncharged, left no tracks. Final proof that the Compton effect really was due to individual X-ray quanta scattering off electrons came in 1927 from experiments based on the so-called coincidence method, developed by Walther Bothe. These experiments showed that individual scattered X-ray quanta and recoil electrons appear at the same instant, laying to rest some arguments that had been voiced to try and reconcile quantum views with the continuous waves of electromagnetic theory. To complete the triumph of the particle picture of light, the American physical chemist Gilbert Lewis coined the name "photon" in 1926, and the fifth Solvay Congress convened the following year under the title "Electrons and Photons." Doubt had evaporated: Light could manifest itself as particles. But there was equally no doubt that, at other times, it could appear as waves. And that didn't seem to make any sense at all. As Einstein said in 1924, "There are ... now two theories of light, both indispensable ... without any logical connection."
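The Compton effect described above has a standard quantitative form that the article does not spell out: treating the X-ray quantum as a particle with energy h*f and momentum h*f/c colliding with a free electron gives a wavelength increase of delta_lambda = (h / (m_e * c)) * (1 - cos(theta)). A minimal Python sketch of that formula:

import math

h = 6.626e-34        # Planck's constant, J*s
m_e = 9.109e-31      # electron rest mass, kg
c = 2.998e8          # speed of light, m/s

theta = math.radians(90.0)                                # scattering angle
delta_lambda = (h / (m_e * c)) * (1 - math.cos(theta))
print(delta_lambda)   # about 2.43e-12 m, the Compton wavelength of the electron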
To Kill a Mockingbird Conceptual Unit Elizabeth Simon Email: Elizabeth523@missouristate.edu Phone: 314.583.1611 5/5/2008 Essential Questions: What is courage? When are people considered "grown up"? How do prejudices/stereotypes impact one's perspective? Text(s): Harper Lee's To Kill a Mockingbird Various selections of poetry (e.g. Theodore Roethke's My Papa's Waltz, Robert Hayden's Those Winter Sundays, and selected African American poetry). Rationale: This conceptual unit revolves around Harper Lee's novel, To Kill a Mockingbird, a text that is required for high school freshmen. I chose this text as my focus for this conceptual unit because it has been one of my favorite and most memorable reads. Even as I have re-read the novel, its significance has remained relevant to high school students today. Harper Lee's novel should be integrated into the classroom because of the variety of literary techniques, essential themes, historical background, and modern-day relevance the novel contains. To Kill a Mockingbird is an essential text to teach in the classroom because the themes of discrimination, innocence, courage, and prejudice presented in Harper Lee's novel remain relevant issues that students still encounter and can apply to their world today. It is essential that students learn the significance of these themes because these same motifs can be found within their own lives, whether within themselves, their families, peers, community, country, or humankind. By exposing students to Harper Lee's novel, To Kill a Mockingbird, they will be challenged to consider the role that innocence and prejudices play in their lives. Harper Lee writes, "The book to read is not the one which thinks for you, but the one which makes you think." Students will be encouraged to expand their outlooks on the world and investigate their own opinions and beliefs about other groups of people, social groups, and races found within their community, society, nation, and world through their reading of To Kill a Mockingbird. Each person faces their own prejudices, and it is essential for him/her to consider that, "You never really understand a person until you consider things from his point of view…until you climb into his skin and walk around in it" (Lee 33). Through the discovery of each of these themes, students will be led to consider the essential questions: "What is courage?" and "How do prejudices impact one's perspective?" Harper Lee's novel, To Kill a Mockingbird, can raise multiple issues of concern, and careful consideration is needed when deciding how to teach it. The particular language found within Harper Lee's novel can present problems within the classroom if the specific racial language is not addressed appropriately. It is important to bring sensitivity to the language in Harper Lee's novel and consider the needs of each student when reading To Kill a Mockingbird. In order to appropriately address this concern, I have planned to pair outside selections of poetry with To Kill a Mockingbird that present a wide variety of perspectives from writers of different races and backgrounds. In addition, supplying proper historical background about Jim Crow Laws and the Scottsboro trials will allow students to connect the historical aspects of our world to the significance of the themes that Harper Lee introduces in her novel and the literary devices she uses to convey these ideas.
By providing historical background and connecting the themes to our nation’s history, students will be able to gain a greater appreciation for the techniques of plot, characterization, theme development, and setting that Harper Lee uses in her novel, To Kill a Mockingbird. Through class discussions, culminating activities, and deeper reading, students will be able to draw personal connections to the text, be challenged in their own thinking and perspectives, and draw connections between the themes in To Kill a Mockingbird and their own lives, communities, nation, and world. By the end of the conceptual unit, I hope that students will be able to consider what courage is and how prejudices can impact one’s perspective. Students should be able to draw larger connections between the themes found in To Kill a Mockingbird and both the history of the world and their world today. Furthermore, I hope that students will be challenged to consider the prejudices and stereotypes that are still prevalent within our society and take steps to consider the perspectives of others – to step into the shoes of individuals who are different from themselves. Grade Level Expectations: Students will be able to: Develop and apply skills and strategies to the reading process by comparing, contrasting, analyzing, and evaluating connections between text ideas and the world, and by analyzing and evaluating the relationship between literature and its historical period and culture (Reading 1,I). Write effectively in various forms and types of writing to interpret, evaluate, or persuade (Writing 3,C). Develop and apply effective listening skills and strategies using active-listening behaviors (Listening and Speaking 1,B). Develop and apply effective speaking skills and strategies for various audiences and purposes in discussions and presentations (Listening and Speaking 2,A). Develop and apply effective research process skills to gather, analyze, and evaluate information, and develop an appropriate research plan to guide investigation and research of focus questions (Information Literacy 1,A). Culminating Activities: These culminating activities will be the main form of assessment. Because there is a literary analysis due at the end of the conceptual unit, many of the activities found in the Literary Portfolio will be assigned throughout the reading of To Kill a Mockingbird, discussed thoroughly in class, and given time to be completed in class. 1. Literary Portfolio: The Literary Portfolio is a compilation of various activities that can be completed. The Literary Portfolio allows creative freedom for students in a structured manner. There are four components of the Literary Portfolio, which are listed below, and each has a particular number of activities that must be completed for full credit. All activities can be expanded or adjusted to your own particular interests with permission. This is a time for you to take creative freedom and liberties. Part I – Characterization of at least 3 characters in the novel (complete 2 of the 5): - Complete a Metaphor Graphic Organizer for a character of your choice. This includes creating your own metaphors to depict the “seen” and “unseen” characterization of a particular character. The metaphor responses should be accompanied by an explanation chart that will be provided by the teacher. - Choose two characters and write a “personal ad” that describes their appearance, interests, social status, and personality traits. Each personal ad should be at least 5 sentences.
- Write a letter from one character to another. The content should be relevant to the novel, whether it addresses the themes found in the novel, a particular situation that the characters encounter, etc. - Find or write a poem or song that you feel relates to or describes a character in To Kill a Mockingbird. Write a 5-sentence paragraph describing why you feel the poem reflects the character that you chose. (If you need direction in finding a poem, feel free to ask the teacher for help). - Draw a picture of a character in To Kill a Mockingbird. Include two quotes from the book that describe or represent the character. Write a 3-5 sentence paragraph for each quote that explains why you believe the quote is significant to the character you have focused on. Part II – A Look at History (complete 1 of the 3): - From our study of Jim Crow Laws, the Scottsboro Trials, and the time period in which To Kill a Mockingbird was written, obtain/use the timeline (given at the beginning of the unit) of important historical events and connect these important dates in history to characters, plot, or events found in Harper Lee’s novel. Each connection needs to contain a specific quote from To Kill a Mockingbird with the correct page number, and a minimum of 3 sentences explaining how the particular passage from the book and the time in history relate. - Make a chart that compares/contrasts the trial of Tom Robinson to the Scottsboro trials. Be sure to cite passages from To Kill a Mockingbird and use factual details from the history of the Scottsboro trials to support your comparison/contrast. - From our study of the time period in which Harper Lee’s novel, To Kill a Mockingbird, takes place, draw comparisons between the town of Maycomb and the historical conditions of our nation. Be sure to cite specific historical moments in our history and specific passages from the novel. Part III – Themes/Symbols (complete 1 of the 3): - Create a collage that depicts aspects of courage. Write a 1 ½ - 2 page paper that explains your collage and why you chose the particular quotes, pictures, etc., in your collage. - Interview a peer or classmate whom you know very little about. This interview can include questions such as: What are your interests? What is one thing that you hope to achieve in your lifetime? If you could change one thing in the world, what would it be and why? After the interview is complete, write a 1 ½ - 2 page paper about the person that you interviewed. Part IV – Connecting to our Community, Nation, and World (complete 2 of the 3): - Complete a “Most Valuable Idea” chart. Write what you think is the single most important idea found in To Kill a Mockingbird. Find an example in the real world that illustrates this idea, and explain the connection between your most valuable idea and the real-world connection that you found. (Your real-world example can include a newspaper or magazine article, song lyrics, etc., and it must be cut and pasted on the chart). - If you were to cast the characters found in Harper Lee’s novel, who would you have represent these characters? Compile a list of the people (famous, such as Brad Pitt, or relatively unknown, such as Uncle Benny) and write a 6-8 sentence explanation of why the person you chose to play a particular character is qualified for that particular role. The people chosen for your cast should not be selected based on their looks or because they are your favorite person. Consider the connections that your “actors” have to the characters that you have chosen them to represent.
You need to cast at least 6 characters. - Find a children’s story, 2 songs, 2 photographs, or 2 television programs that address the central ideas/themes that are found in Harper Lee’s novel. Provide a summary of the children’s story or television program and write how either of these modern-day examples connects to To Kill a Mockingbird. Or provide two print-outs or copies of the lyrics and music, or print-outs of the two photographs, with a written explanation (1 page each) of how these modern-day examples connect to the central ideas found in the novel. 2. Literary Analysis After reading the novel To Kill a Mockingbird, write a paper that addresses the theme of courage, prejudice, or growing up. You should be able to connect a character from To Kill a Mockingbird with a particular theme and use specific passages and quotes to support your thesis statement. - Students will practice incorporating quotes from the novel into their writing, with attention paid to appropriate use of quotes for support and smooth integration of quotes. - Students will work to master strong verb usage. They will learn to avoid over-used terminology, clichés, and first-person and second-person references in formal academic writing. Lesson Plans (50 minute class periods): Day 1: 5 minutes of journaling: When are people considered “grown up?” What is courage? How do prejudices/stereotypes impact someone’s perspective of another? Introduce Harper Lee’s novel, To Kill a Mockingbird, and explain the significance of reading Harper Lee’s novel with attention to the historical background of when the novel was written. Provide visuals, such as “white only” signs, that reflect Jim Crow Laws, and provide examples of literacy tests that were used to discriminate against blacks (15 minutes). Address the language in the novel and the sensitivity that needs to be brought to reading and discussing To Kill a Mockingbird, through Countee Cullen’s poem, “Incident” (10 minutes). Read the essential questions and explain that students should be able to address these questions when completing the reading of Lee’s novel, To Kill a Mockingbird, and through the completion of the culminating activities (5 minutes). Introduce culminating activities/reading schedule (10 minutes). Exit pass (5 minutes): Have students reflect on the historical background of To Kill a Mockingbird and write what their reactions are to beginning Harper Lee’s novel. Assign Chapter 1 for homework, which will be due for Day 3. Have students write at least 3 Question Slips. These slips should contain any question the student may have while reading the first chapter. Day 2: 5 minutes of journaling: - This will be a time for free-writing. Have students complete an opinionaire introduction activity where they respond to statements by rating their opinions on a scale from 1 (disagree) to 10 (agree) – (5 minutes). Students will find a partner and discuss their responses with each other. They will choose a question that they would like to focus on (10 minutes). Partners will then find another pair-share group and discuss the question that each pair has previously selected (10 minutes). Enter Socratic Seminar and have the whole class converse about the statements that they want to discuss further (15 minutes). Exit Pass: Have students reflect on how the discussion went, something new that they learned, or what insights someone else mentioned that were impactful to them (5 minutes). If there is extra time, allow them to read Chapter 1.
Day 3: 5 minutes of journaling: - Consider Chapter 1 and write any questions that you may have after reading the first chapter. - What particular character did you connect with, or what aspect of Chapter 1 did you most enjoy? Divide students into groups and assign each person a specific role (20 minutes). - Summarizer: Prepares a brief summary of the chapter. - Questioner: Asks questions about what has been read in the text so far. - Literary Luminary: Locates passages that align with the group’s discussion. - Recorder: Records the group’s thinking, unanswered questions, and page numbers of passages that have been highlighted in discussion. - ALL: should be prepared to present the group’s findings/discussion/questions. Open the class to a large group discussion and have each group present one question they discussed and the answers they found. Answer any questions that students may have over the first chapter. Direct students to important details/quotes found in the chapter (20 minutes). Exit Pass: Write about the first chapter and write any questions that you want answered. Day 4: 5 minute journaling: - How does education extend beyond the classroom? - Atticus states in Chapter 3, “You never really understand a person until you consider things from his point of view – until you climb into his skin and walk around in it” (33). What is your interpretation of or reaction to this quote? Read Chapters 2 and 3 as a class (30 minutes). Introduce the Metaphoric Graphic Organizer “Shoes,” and explain that students will depict a character found within Chapters 2 and 3 by comparing him or her to a shoe that they feel best represents that character. They must provide textual support for their reasoning (5 minutes). Allow students time to create/draw/paste pictures of shoes and find quotes that can be used to complete one of the requirements for Part I of their Literary Portfolio (10 minutes). Assign Chapters 4 and 5 for the next class period. Day 5: 5 minute journaling: - Choose a character (Jem, Scout, Dill, Miss Maudie, or Boo Radley) and write your reaction to them and their actions. Introduce and explain Hotseating (5 minutes). Arrange the classroom into five groups and assign each group a character to be put in the Hotseat. At this time, they should choose someone to represent their character and create two questions to ask their character based on Chapters 4 and 5 (10 minutes). Begin Hotseating (25 minutes). End with an exit pass asking students to consider and write about a perspective that was presented in the Hotseat that they thought was interesting. Students who were in the Hotseat will be asked to consider what they were feeling or thinking as the character they were representing (5 minutes). Day 6: 5 minutes of Journaling: - Describe aspects of your own neighborhood or surrounding communities, or describe the city that you live in. What particular events or typical associations come to mind when you think about your area? Explain and hand out reflection logs for students to fill out during the in-class reading, which will be turned in at the end of class for a completion grade. When reading the text in class, students should be able to draw conclusions about the town of Maycomb, recognize literary details that the author uses to describe the town/setting of Maycomb, and make judgments about the actions or behaviors of the characters through the setting (5 minutes). Read Chapters 6, 7, and 8 in class (35-40 minutes). Assign Chapter 9 for the next class period.
Day 7: 5 minutes of journaling: - Why are people prone to disregard or ignore the lessons that they can learn from children? - What kind of lessons can we learn from people who are younger than us? Discuss the essential question, “How do prejudices/stereotypes impact one’s perspective?” in relation to Uncle Jack’s prejudices towards children and Aunt Alexandra’s prejudices towards Scout (15 minutes). Maycomb’s prejudices towards blacks are explored through Maya Angelou’s poem, “I know why the caged bird sings” (15 minutes). Any time left over is dedicated to reading for the next class period (15 minutes). Assign Chapters 10 and 11 for the next class period. Day 8: 5 minutes of journaling: - What kind of father do you hope to be? - What kind of dad do you hope the father of your children will be? - What is a father to you? Introduce Round Robin Monologues and give students the prompt “A father is…”, and have students create a sentence that summarizes their feelings and completes the statement presented (5 minutes). Once students have completed their sentences, move around the circle and have students read aloud the sentence that they wrote while everyone else writes down the sentences. Repeat the Round Robin Monologue without students writing each other’s sentences (10 minutes). Have students write a poem that incorporates the sentences that were used in the Round Robin Monologue (10 minutes). Have students share their poems with a partner and discuss the different perspectives that were presented in the Round Robin Monologue (5 minutes). Have students write a poem about Atticus incorporating the different views found among Aunt Alexandra, Scout, Jem, and the class (10 minutes). Exit Pass: Take a sentence that was presented in class and relate Atticus to it. How does he fit that description, or how is he different from the sentence that you have chosen? Day 9: 5 minutes of journaling: - How do prejudices/stereotypes encourage discrimination? - How do people’s prejudices/stereotypes of others change their attitude? Hand out a slip of paper for students to write a question they have at the end of the reading. Explain and hand out reflection logs for students to fill out during the in-class reading, which will be turned in at the end of class for a completion grade. Each group member will have a different set of questions to ensure that she/he contributes to the discussion and reading (5 minutes). Read Chapters 12-14 in groups (35-40 minutes). Turn in reflection logs. Day 10: Hand out group worksheet. 5 minutes of journaling: - Has there ever been a time in your life when you have been misunderstood, stereotyped, or judged by others? - How are groups of people misunderstood or discriminated against in your own community, nation, or world? Review Jim Crow Laws and the Scottsboro trials (10 minutes). Have students enter groups and give each group two Mystery Envelopes containing questions related to each chapter. As students read through Chapters 15 and 16, have them answer the Mystery Envelope questions and write their answers on a handout given to them at the beginning of class (30 minutes). Exit Pass: Write about an event, character, etc. from Chapter 15 or 16 that impacted or interested you (5 minutes). Day 11: 5 minutes of journaling: - Predict what is going to happen at Tom Robinson’s trial based on what we have read already. - What makes a person courageous?
Read Chapters 17, 18, and 19 in class and choose students to “Read n’ Act” the roles of the characters in the trial scene (40 minutes). Exit Pass: Have students choose one passage from Chapters 17, 18, or 19 and write about why they chose the passage and the significance it holds for the essential questions or for the novel (5 minutes). Assign Chapters 20-22 for the next class period and have students prepare, for each chapter, one question they have – whether it is for clarification or whether it is to assess the motives of a character or the author’s intent, etc. Day 12: 5 minutes of journaling: - When we consider other people’s perspectives, what do we learn about those people? - Write about a character in To Kill a Mockingbird and how your opinion of them has changed since we first met them. Why do you think you have a different perspective of them? Show students the scene of Atticus’ final statement in the movie, To Kill a Mockingbird (7 minutes). Have students enter into groups to discuss their prepared questions with one another. Each student needs to turn in a sheet of paper that has at least two things that everybody said during their group discussion (20 minutes). Allow students to work on Part II of their Literary Portfolio (15 minutes). Day 13: 5 minutes of journaling: - How can our prejudices of others impact our words and actions? - How does one’s background or upbringing influence their life? - What makes a courageous person weak? Read Chapters 23 and 24 in class while incorporating hotseating throughout. Specifically focus on the different views that the women of the Missionary Society may have had during these two chapters (Scout, Aunt Alexandra, Calpurnia, Miss Maudie) – (35 minutes). Exit Pass: Hotseating debrief worksheets will be distributed and students will describe how they interpreted characters, setting, and plot (5 minutes). If there is time left over, allow students to read Chapters 25, 26, and 27, which are due for the next class period. Day 14: 5 minutes of journaling: - Who or what are we prejudiced towards in our own lives? - Respond to what Scout says in Chapter 26: “How can you hate Hitler so bad an’ then turn around and be ugly about folks right at home” (283). Introduce Phillis Wheatley’s poem, “To a Lady and Her Children,” Langston Hughes’ poem “I, Too, Sing America,” and Theodore Roethke’s “My Papa’s Waltz” to discuss the characters Helen, Miss Gates, and Bob Ewell. Have students connect passages in To Kill a Mockingbird to the poems to create deeper meaning and connections between the characters and the plot. What can these poems reveal to us about the characters? What perspective can they give us? Day 15: 5 minutes of journaling: - Describe a person who you think is courageous. Read Chapters 28-29 and discuss and incorporate reflective writing while reading (25 minutes). Allow students to work on their Literary Analysis – provide brainstorming techniques, concept maps, and an outline map (20 minutes). Outline maps should be assigned for homework. Day 16: 5 minutes of journaling: - What does this book mean in terms of my family? (“Family” can extend beyond your parents and siblings to include your close friends and those you care for). Read Chapters 30-31 as a class (20 minutes). Revisit the Opinionaire completed at the beginning of the conceptual unit. Discuss how opinions have evolved or changed. Have students choose one statement and consider how someone may view it differently from themselves (20 minutes).
Address culminating activities: Students should bring any materials needed to work on Part IV of their Literary Portfolio for the next class period. Students will also be required to turn in a rough draft of their Introduction and 1 body paragraph of their Literary Analysis. Day 17: 5 minutes of journaling: - Why should people your age be concerned with the issues presented in this book? Students will be given time in class to work on Part IV and any other Parts of the Literary Portfolio, which should be near completion (40 minutes). Students will receive their rough drafts back with written feedback (5 minutes). They must bring a nearly completed rough draft of their Literary Analysis. http://readwritethink.org/lessons/lesson_view.asp?id=1003 Day 18: 5 minutes of journaling: - How do the ideas in this book affect both your community and others? Mini-Lesson on integrating quotes into a paper (10 minutes). Peer response groups and mini-conferences with the teacher throughout the class period. During this time students will be reflecting on the content of the Literary Analysis and the textual support from To Kill a Mockingbird (30 minutes). Day 19: 5 minutes of journaling: - What does this book mean in terms of thinking about my country? - What can we learn about humanity from reading this text? Literary Portfolio due. Peer editing groups will focus on the use of strong verbs, punctuation, grammar, and MLA documentation. Any conferencing that was not completed last period will continue (30 minutes). Begin presentations of the Literary Portfolios. Students will each present one activity they completed to the class. Each student will write a sentence response to each presentation (15 minutes). Day 20: 5 minutes of journaling: - What is your overall opinion about your experience with Harper Lee’s novel, To Kill a Mockingbird? Literary Analysis due. Complete presentations of the Literary Portfolios (40 minutes). Exit Pass: Students will write what they have learned most from reading To Kill a Mockingbird and why people their age should read Lee’s novel (5 minutes). Scoring Guide: Literary Portfolio Organization/Neatness – Excellent (A Portfolio): The Literary Portfolio is correctly organized, with Parts I, II, III, and IV following in order, and is free of grammar and punctuation errors. There is a Table of Contents. There is an introductory letter describing how your activities have impacted you in your reading of To Kill a Mockingbird. Good (B Portfolio): The Literary Portfolio contains Parts I, II, III, and IV but is not in the correct order and contains grammar and punctuation errors. The Table of Contents is not complete or does not follow the specific requirements outlined. Average (C Portfolio): The Literary Portfolio is not organized with Parts I, II, III, and IV in the correct order and/or Parts are missing, and it contains many punctuation and grammar errors. The Literary Portfolio does not have a Table of Contents. There is no introductory letter. Requirements – Excellent: The Literary Portfolio contains Parts I, II, III, and IV with each Part containing the required amount of completed activities. Good: The Literary Portfolio contains Parts I, II, III, and IV with no more than 1 requirement missing from all the Parts combined. Average: The Literary Portfolio is missing more than one activity from Part I, II, III, or IV.
Literary Understanding – Excellent: Student shows a deeper understanding of characters, historical background, themes, and cultural relevance. Student connects To Kill a Mockingbird to specific characters, historical information, and themes. Student is able to express what Lee’s novel implies about human nature and relate specific details from the novel to current cultural events/subjects. Good: Student shows a deeper understanding of characters, historical background, and themes. Student makes connections in To Kill a Mockingbird, but they are not specific in providing examples from the novel to specific events/subjects in their culture/world. Average: Student makes simple connections in the novel To Kill a Mockingbird. The student does not use specific examples from Lee’s novel to connect to the historical background, themes, or culture surrounding them. Scoring Guide: Literary Analysis Support for Thesis – Level 4 (strongest): Relevant, telling, quality details give the reader important information that goes beyond the obvious or predictable. Level 3: Supporting details and information are relevant, but one key issue or portion of the analysis is unsupported. Level 2: Supporting details and information are relevant, but several key issues or portions of the storyline are unsupported. Level 1 (weakest): Supporting details and information are typically unclear or not related to the topic. Integrating Quotes – Level 4: Quotes and examples support the topic and are integrated and flow smoothly within the Literary Analysis. Level 3: Most quotes are integrated smoothly and refrain from "plop and drop." Most quotes and examples from the source support the topic. Level 2: Many quotes are not integrated smoothly in the Literary Analysis. Examples and quotes from the source do not clearly connect to the topic. Level 1: Almost all quotes are not integrated smoothly into the Literary Analysis. Quotes and examples from the source do not accurately support the topic. MLA documentation – Level 4: All in-text citations are cited correctly. Works Cited is included and is without mistakes. Page Setup is correct and consistent. Level 3: Most in-text citations are cited correctly. Works Cited is included and contains few errors. Page Setup is correct and contains few errors. Level 2: A few in-text citations are cited correctly. Works Cited Page contains errors. Page Setup contains multiple errors. Level 1: Many in-text citations are not cited correctly. There is no Works Cited Page. Page Setup is incorrect and contains many errors. Organization – Level 4: Details are placed in a logical order and the way they are presented effectively keeps the interest of the reader. Level 3: Details are placed in a logical order, but the way in which they are presented/introduced sometimes makes the writing less interesting. Level 2: Some details are not in a logical or expected order, and this distracts the reader. Level 1: Many details are not in a logical or expected order. There is little sense that the writing is organized. Verb Usage – Level 4: Writer uses strong verbs and avoids using first and second person within their formal writing assignment. Level 3: Writer uses mostly strong verbs, avoids using first and second person references, and uses very little over-used terminology. Level 2: Writing has many "weak" verbs, may make first or second person references, or contains many over-used terminologies. Level 1: Student makes no effort to use strong verbs in their writing. First and second person references are used all throughout the Literary Analysis. Writing contains over-used terminology.
Spelling and Grammar – Level 4: Writer makes no errors in grammar or spelling that distract the reader from the content. Level 3: Writer makes a few errors in grammar or spelling that distract the reader from the content. Level 2: Writer makes a substantial amount of errors in grammar or spelling that distract the reader from the content. Level 1: Writer makes many errors in grammar or spelling that distract the reader from the content. Reading Schedule and Assignment Calendar Day 1: Introduction to Harper Lee’s novel, To Kill a Mockingbird. Read Chapter 1 by Day 3. Day 2: In-class Opinionaire; Homework: Read Chapter 1 for next class period. Day 3: Group work; Homework: Review Literary Portfolio, Part I. Day 4: Chapters 2-3 read in class; Work on an activity found in Part I of Literary Portfolio. Homework: Read Chapters 4-5 for next class period. Day 5: Hotseating Day 6: Chapters 6-8 read in class Homework: Chapter 9 due next class period. Day 7: Discuss Chapter 9 in class. Homework: Read Chapters 10-11 for next class period. Day 8: Round Robin Monologue Homework: You should consider spending the next few nights working on your Literary Portfolio. Day 9: Read Chapters 12-14 in class. Day 10: Read Chapters 15-16 in class. Day 11: Read Chapters 17-19 in class. Homework: Read Chapters 20-22 for next class period. Day 12: Discuss Chapters 20-22 Day 13: Read Chapters 23-24 in class Homework: Read Chapters 25-27 Day 14: Discuss Chapters 25-27 Day 15: Read Chapters 28-29 in class Homework: Literary Analysis Outlines due next class period. Continue working on Literary Portfolio Day 16: Read Chapters 30-31 in class Homework: 1st Rough Draft of Literary Analysis due next class period. Day 17: Students will be given time to work on Literary Portfolios Homework: Bring revised Rough Draft of Literary Analysis for the next class period. Day 18: Peer-responding to Literary Analysis/Conferencing with teacher. Homework: Literary Portfolio due (be ready to provide a 3-4 minute presentation on one activity that you completed). Day 19: Peer-editing Literary Analysis/Conferencing with teacher. Homework: Literary Analysis due next class period. Day 20: Presentations; Closing remarks on TKAM. To a Lady and Her Children By Phillis Wheatley O'erwhelming sorrow now demands my song: From death the overwhelming sorrow sprung. What flowing tears? What hearts with grief opprest? What sighs on sighs heave the fond parent's breast? The brother weeps, the hapless sisters join Th' increasing woe, and swell the crystal brine; The poor, who once his gen'rous bounty fed, Droop, and bewail their benefactor dead. In death the friend, the kind companion lies, And in one death what various comfort dies! Th' unhappy mother sees the sanguine rill Forget to flow, and nature's wheels stand still, But see from earth his spirit far remov'd, And know no grief recalls your best-belov'd: He, upon pinions swifter than the wind, Has left mortality's sad scenes behind For joys to this terrestrial state unknown, And glories richer than the monarch's crown. Of virtue's steady course the prize behold! What blissful wonders to his mind unfold! But of celestial joys I sing in vain: Attempt not, muse, the too advent'rous strain. No more in briny show'rs, ye friends around, Or bathe his clay, or waste them on the ground: Still do you weep, still wish for his return? How cruel thus to wish, and thus to mourn? No more for him the streams of sorrow pour, But haste to join him on the heav'nly shore, On harps of gold to tune immortal lays, And to your God immortal anthems raise.
I, Too, Sing America By Langston Hughes I, too, sing America. I am the darker brother. They send me to eat in the kitchen When company comes, But I laugh, And eat well, And grow strong. Tomorrow, I'll be at the table When company comes. Nobody'll dare Say to me, "Eat in the kitchen," Then. Besides, They'll see how beautiful I am And be ashamed-- I, too, am America. Incident By Countee Cullen Once riding in old Baltimore, Heart-filled, head-filled with glee; I saw a Baltimorean Keep looking straight at me. Now I was eight and very small, And he was no whit bigger, And so I smiled, but he poked out His tongue, and called me, "Nigger." I saw the whole of Baltimore From May until December; Of all the things that happened there That's all that I remember. I know why the caged bird sings By Maya Angelou A free bird leaps on the back Of the wind and floats downstream Till the current ends and dips his wing In the orange sun's rays And dares to claim the sky. But a bird that stalks down his narrow cage Can seldom see through his bars of rage His wings are clipped and his feet are tied So he opens his throat to sing. The caged bird sings with a fearful trill Of things unknown but longed for still And his tune is heard on the distant hill for The caged bird sings of freedom. The free bird thinks of another breeze And the trade winds soft through The sighing trees And the fat worms waiting on a dawn-bright Lawn and he names the sky his own. But a caged bird stands on the grave of dreams His shadow shouts on a nightmare scream His wings are clipped and his feet are tied So he opens his throat to sing. The caged bird sings with A fearful trill of things unknown But longed for still and his Tune is heard on the distant hill For the caged bird sings of freedom. My Papa's Waltz By Theodore Roethke The whiskey on your breath Could make a small boy dizzy; But I hung on like death: Such waltzing was not easy. We romped until the pans Slid from the kitchen shelf; My mother's countenance Could not unfrown itself. The hand that held my wrist Was battered on one knuckle; At every step you missed My right ear scraped a buckle. You beat time on my head With a palm caked hard by dirt, Then waltzed me off to bed Still clinging to your shirt. Those Winter Sundays By Robert Hayden Sundays too my father got up early and put his clothes on in the blueblack cold, then with cracked hands that ached from labor in the weekday weather made banked fires blaze. No one ever thanked him. I’d wake and hear the cold splintering, breaking. When the rooms were warm, he’d call, and slowly I would rise and dress, fearing the chronic angers of that house, Speaking indifferently to him, who had driven out the cold and polished my good shoes as well. What did I know, what did I know of love’s austere and lonely offices?
Millipedes are a group of arthropods that are characterised by having two pairs of jointed legs on most body segments; they are known scientifically as the class Diplopoda, the name being derived from this feature. Each double-legged segment is a result of two single segments fused together. Most millipedes have very elongated cylindrical or flattened bodies with more than 20 segments, while pill millipedes are shorter and can roll into a ball. Although the name "millipede" derives from the Latin for "thousand feet", no known species has 1,000; the record of 750 legs belongs to Illacme plenipes. There are approximately 12,000 named species classified into 16 orders and around 140 families, making Diplopoda the largest class of myriapods, an arthropod group which also includes centipedes and other multi-legged creatures. Most millipedes are slow-moving detritivores, eating decaying leaves and other dead plant matter. Some eat fungi or suck plant fluids, and a small minority are predatory. Millipedes are generally harmless to humans, although some can become household or garden pests. Millipedes can be particularly unwelcome in greenhouses, where they can cause severe damage to emergent seedlings. Most millipedes defend themselves with a variety of chemicals secreted from pores along the body, although the tiny bristle millipedes are covered with tufts of detachable bristles. Reproduction in most species is carried out by modified male legs called gonopods, which transfer packets of sperm to females. First appearing in the Silurian period, millipedes are some of the oldest known land animals. Some members of prehistoric groups grew to over 2 m (6 ft 7 in); the largest modern species reach maximum lengths of 27 to 38 cm (11 to 15 in). The longest extant species is the giant African millipede (Archispirostreptus gigas). Among myriapods, millipedes have traditionally been considered most closely related to the tiny pauropods, although some molecular studies challenge this relationship. Millipedes can be distinguished from the somewhat similar but only distantly related centipedes (class Chilopoda), which move rapidly, are carnivorous, and have only a single pair of legs on each body segment. The scientific study of millipedes is known as diplopodology, and a scientist who studies them is called a diplopodologist. Etymology and names The scientific name "Diplopoda" comes from the Ancient Greek words διπλοῦς (diplous), "double" and ποδός (podos), "foot", referring to the appearance of two pairs of legs on most segments, as described below. The common name "millipede" is a compound word formed from the Latin roots mille ("thousand") and ped ("foot"). The term "millipede" is widespread in popular and scientific literature, but among North American scientists, the term "milliped" (without the terminal e) is also used. Other vernacular names include "thousand-legger" or simply "diplopod". The science of millipede biology and taxonomy is called diplopodology: the study of diplopods. Approximately 12,000 millipede species have been described. Estimates of the true number of species on earth range from 15,000 to as high as 80,000. Few species of millipede are at all widespread; they have very poor dispersal abilities, depending as they do on terrestrial locomotion and humid habitats. These factors have favoured genetic isolation and rapid speciation, producing many lineages with restricted ranges.
The living members of the Diplopoda are divided into sixteen orders in two subclasses. The basal subclass Penicillata contains a single order, Polyxenida (bristle millipedes). All other millipedes belong to the subclass Chilognatha, consisting of two infraclasses: Pentazonia, containing the short-bodied pill millipedes, and Helminthomorpha (worm-like millipedes), containing the great majority of the species. Outline of classification The higher-level classification of millipedes is presented below, based on Shear, 2011, and Shear & Edgecombe, 2010 (extinct groups). Recent cladistic and molecular studies have challenged the traditional classification schemes above, and in particular the position of the orders Siphoniulida and Polyzoniida is not yet well established. The placement and positions of extinct groups (†) known only from fossils is tentative and not fully resolved. After each name is listed the author citation: the name of the person who coined the name or defined the group, even if not at the current rank. Class Diplopoda de Blainville in Gervais, 1844 - Subclass Penicillata Latreille, 1831 - Order Polyxenida Verhoeff, 1934 - Subclass †Arthropleuridea (placed in Penicillata by some authors) - Subclass Chilognatha Latreille, 1802 - Order †Zosterogrammida Wilson, 2005 (Chilognatha incertae sedis) - Infraclass Pentazonia Brandt, 1833 - Infraclass Helminthomorpha Pocock, 1887 - Superorder †Archipolypoda Scudder, 1882 - Order †Pleurojulida Schneider & Werneburg, 1998 (possibly sister to Colobognatha) - Subterclass Colobognatha Brandt, 1834 - Subterclass Eugnatha Attems, 1898 - Superorder Juliformia Attems, 1926 - Superorder Nematophora Verhoeff, 1913 - Superorder Merocheta Cook, 1895 - Order Polydesmida Pocock, 1887 Millipedes are among the first animals to have colonised land during the Silurian period. Early forms probably ate mosses and primitive vascular plants. There are two major groups of millipedes whose members are all extinct: the Archipolypoda ("ancient, many-legged ones"), which contain the oldest known terrestrial animals, and the Arthropleuridea, which contain the largest known land invertebrates. The earliest known land creature, Pneumodesmus newmani, was a 1 cm (0.4 in) long archipolypodan that lived 428 million years ago in the upper Silurian, and has clear evidence of spiracles (breathing holes) attesting to its air-breathing habits. During the Upper Carboniferous, Arthropleura became the largest known land-dwelling invertebrate on record, reaching lengths of at least 2 m (6 ft 7 in). Millipedes also exhibit the earliest evidence of chemical defence, as some Devonian fossils have defensive gland openings called ozopores. Millipedes, centipedes, and other terrestrial arthropods attained very large sizes in comparison to modern species in the oxygen-rich environments of the Devonian and Carboniferous periods, and some could grow larger than one metre. As oxygen levels lowered through time, arthropods became smaller. The history of scientific millipede classification began with Carl Linnaeus, who in his 10th edition of Systema Naturae, 1758, named seven species of Julus as "Insecta Aptera" (wingless insects). In 1802, the French zoologist Pierre André Latreille proposed the name Chilognatha as the first group of what are now the Diplopoda, and in 1840 the German naturalist Johann Friedrich von Brandt produced the first detailed classification. The name Diplopoda itself was coined in 1844 by the French zoologist Henri Marie Ducrotay de Blainville.
From 1890 to 1940, millipede taxonomy was driven by relatively few researchers at any given time, with major contributions by Carl Attems, Karl Wilhelm Verhoeff and Ralph Vary Chamberlin, who each described over 1,000 species, as well as Orator F. Cook, Filippo Silvestri, R. I. Pocock, and Henry W. Brölemann. This was a period when the science of diplopodology flourished: rates of species descriptions were on average the highest in history, sometimes exceeding 300 per year. In 1971, the Dutch biologist C. A. W. Jeekel published a comprehensive listing of all known millipede genera and families described between 1758 and 1957 in his Nomenclator Generum et Familiarum Diplopodorum, a work credited as launching the "modern era" of millipede taxonomy. In 1980, the American biologist Richard L. Hoffman published a classification of millipedes which recognized the Penicillata, Pentazonia, and Helminthomorpha, and the first phylogenetic analysis of millipede orders using modern cladistic methods was published in 1984 by Henrik Enghoff of Denmark. A 2003 classification by the American myriapodologist Rowland Shelley is similar to the one originally proposed by Verhoeff, and remains the currently accepted classification scheme (shown below), despite more recent molecular studies proposing conflicting relationships. A 2011 summary of millipede family diversity by William A. Shear placed the order Siphoniulida within the larger group Nematophora. In addition to the 16 living orders, there are 9 extinct orders and one superfamily known only from fossils. The relationship of these to living groups and to each other is controversial. The extinct Arthropleuridea was long considered a distinct myriapod class, although work in the early 21st century established the group as a subclass of millipedes. Several living orders also appear in the fossil record. Below are two proposed arrangements of fossil millipede groups. Extinct groups are indicated with a dagger (†). The extinct order Zosterogrammida, a chilognath of uncertain position, is not shown. [Cladogram caption: "Alternate hypothesis of fossil relationships"] Relation to other myriapods Although the relationships of millipede orders are still the subject of debate, the class Diplopoda as a whole is considered a monophyletic group of arthropods: all millipedes are more closely related to each other than to any other arthropods. Diplopoda is a class within the arthropod subphylum Myriapoda, the myriapods, which includes centipedes (class Chilopoda) as well as the lesser-known pauropods (class Pauropoda) and symphylans (class Symphyla). Within myriapods, the closest relatives or sister group of millipedes has long been considered the pauropods, which also have a collum and diplosegments. Distinction from centipedes The differences between millipedes and centipedes are a common question from the general public. Both groups of myriapods share similarities, such as long, multi-segmented bodies, many legs, a single pair of antennae, and the presence of postantennal organs, but they have many differences and distinct evolutionary histories, as the most recent common ancestor of centipedes and millipedes lived around 450 to 475 million years ago in the Silurian. The head alone exemplifies the differences; millipedes have short, elbowed antennae for probing the substrate, a pair of robust mandibles, and a single pair of maxillae fused into a lip; centipedes have long, threadlike antennae, a pair of small mandibles, two pairs of maxillae, and a pair of large poison claws.
|Trait||Millipedes||Centipedes| |Legs||Two pairs on most body segments; attached to underside of body||One pair per body segment; attached to sides of body; last pair extends backwards| |Locomotion||Generally adapted for burrowing or inhabiting small crevices; slow-moving||Generally adapted for running, except for the burrowing soil centipedes| |Feeding||Primarily detritivores, some herbivores, few carnivores; no venom||Primarily carnivores with claws modified into venomous fangs| |Spiracles||On underside of body||On the sides or top of body| |Reproductive openings||Third body segment||Last body segment| |Reproductive behaviour||Male generally inserts spermatophore into female with gonopods||Male produces spermatophore that is usually picked up by female| Millipedes come in a variety of body shapes and sizes, ranging from 2 mm (0.08 in) to around 35 cm (14 in) in length, and can have as few as eleven to over a hundred segments. They are generally black or brown in colour, although there are a few brightly coloured species, and some have aposematic colouring to warn that they are toxic. Species of Motyxia produce cyanide as a chemical defence and are bioluminescent. Body styles vary greatly between major millipede groups. In the basal subclass Penicillata, consisting of the tiny bristle millipedes, the exoskeleton is soft and uncalcified, and is covered in prominent setae or bristles. All other millipedes, belonging to the subclass Chilognatha, have a hardened exoskeleton. The chilognaths are in turn divided into two infraclasses: the Pentazonia, containing relatively short-bodied groups such as pill millipedes, and the Helminthomorpha ("worm-like" millipedes), which contains the vast majority of species, with long, many-segmented bodies. The head of a millipede is typically rounded above and flattened below and bears a pair of large mandibles in front of a plate-like structure called a gnathochilarium ("jaw lip"). The head contains a single pair of antennae with seven or eight segments and a group of sensory cones at the tip. Many orders also possess a pair of sensory organs known as the Tömösváry organs, shaped as small oval rings posterior and lateral to the base of the antennae. Their function is unknown, but they also occur in some centipedes, and are possibly used to measure humidity or light levels in the surrounding environment. Millipede eyes consist of several simple flat-lensed ocelli arranged in a group or patch on each side of the head. These patches are also called ocular fields or ocellaria. Many species of millipedes, including the entire order Polydesmida and cave-dwelling millipedes such as Causeyella and Trichopetalum, had ancestors that could see but have subsequently lost their eyes and are blind. Millipede bodies may be flattened or cylindrical, and are composed of numerous metameric segments, each with an exoskeleton consisting of five chitinous plates: a single plate above (the tergite), one at each side (pleurites), and a plate on the underside (sternite) where the legs attach. In many millipedes, these plates are fused to varying degrees, sometimes forming a single cylindrical ring. The plates are typically hard, being impregnated with calcium salts. Because they lack a waxy cuticle and cannot close their permanently open spiracles, millipedes are susceptible to water loss and must spend most of their time in moist or humid environments. The first segment behind the head is legless and known as a collum (from the Latin for neck or collar).
The second, third, and fourth body segments bear a single pair of legs each and are known as "haplosegments", from the Greek haplo, "single" (the three haplosegments are sometimes referred to as a "thorax"). The remaining segments, from the fifth to the posterior, are properly known as diplosegments or double segments, formed by the fusion of two embryonic segments. Each diplosegment bears two pairs of legs, rather than just one as in centipedes. In some millipedes, the last few segments may be legless. The terms "segment" or "body ring" are often used interchangeably to refer to both haplo- and diplosegments. The final segment is known as the telson and consists of a legless preanal ring, a pair of anal valves (closeable plates around the anus), and a small scale below the anus. Millipedes in several orders have keel-like extensions of the body-wall known as paranota, which can vary widely in shape, size, and texture; modifications include lobes, papillae, ridges, crests, spines and notches. Paranota may allow millipedes to wedge more securely into crevices, protect the legs, or make the millipede more difficult for predators to swallow. The legs are composed of seven segments, and attach on the underside of the body. The legs of an individual are generally rather similar to each other, although often longer in males than females, and males of some species may have a reduced or enlarged first pair of legs. The most conspicuous leg modifications are involved in reproduction, discussed below. Despite the common name, no millipede has been discovered with 1,000 legs: common species have between 34 and 400 legs, and the record is held by Illacme plenipes, with individuals possessing up to 750 legs – more than any other creature on Earth. Millipedes breathe through two pairs of spiracles located ventrally on each segment near the base of the legs. Each opens into an internal pouch, and connects to a system of tracheae. The heart runs the entire length of the body, with an aorta stretching into the head. The excretory organs are two pairs of Malpighian tubules, located near the mid-part of the gut. The digestive tract is a simple tube with two pairs of salivary glands to help digest the food. Reproduction and growth Millipedes show a diversity of mating styles and structures. In the basal order Polyxenida (bristle millipedes), mating is indirect: males deposit spermatophores onto webs they secrete with special glands, and the spermatophores are subsequently picked up by females. In all other millipede groups, males possess one or two pairs of modified legs called gonopods which are used to transfer sperm to the female during copulation. The location of the gonopods differs between groups: in males of the Pentazonia they are located at the rear of the body and known as telopods and may also function in grasping females, while in the Helminthomorpha – the vast majority of species – they are located on the seventh body segment. A few species are parthenogenetic, having few, if any, males. Gonopods occur in a diversity of shapes and sizes, ranging from closely resembling walking legs to complex structures quite unlike legs at all. In some groups, the gonopods are kept retracted within the body; in others they project forward parallel to the body.
Gonopod morphology is the predominant means of determining species among millipedes: the structures may differ greatly between closely related species but very little within a species. The gonopods develop gradually from walking legs through successive moults until reproductive maturity. The genital openings (gonopores) of both sexes are located on the underside of the third body segment (near the second pair of legs) and may be accompanied in the male by one or two penes which deposit the sperm packets onto the gonopods. In the female, the genital pores open into paired small sacs called cyphopods or vulvae, which are covered by small hood-like lids, and are used to store the sperm after copulation. The cyphopod morphology can also be used to identify species. Millipede sperm lack flagella, a unique trait among myriapods. In all except the bristle millipedes, copulation occurs with the two individuals facing one another. Copulation may be preceded by male behaviours such as tapping with antennae, running along the back of the female, offering edible glandular secretions, or in the case of some pill-millipedes, stridulation or "chirping". During copulation in most millipedes, the male positions his seventh segment in front of the female's third segment, and may insert his gonopods to extrude the vulvae before bending his body to deposit sperm onto his gonopods and reinserting the "charged" gonopods into the female. Females lay from ten to three hundred eggs at a time, depending on species, fertilising them with the stored sperm as they do so. Many species deposit the eggs on moist soil or organic detritus, but some construct nests lined with dried faeces, and may protect the eggs within silk cocoons. In most species, the female abandons the eggs after they are laid, but some species in the orders Platydesmida and Stemmiulida provide parental care for eggs and young. The young hatch after a few weeks, and typically have only three pairs of legs, followed by up to four legless segments. As they grow, they continually moult, adding further segments and legs as they do so. Some species moult within specially prepared chambers of soil or silk, and may also shelter in these during wet weather, and most species eat the discarded exoskeleton after moulting. The adult stage, when individuals become reproductively mature, is generally reached in the final moult stage, which varies between species and orders, although some species continue to moult after adulthood. Furthermore, some species alternate between reproductive and non-reproductive stages after maturity, a phenomenon known as periodomorphosis, in which the reproductive structures regress during non-reproductive stages. Millipedes may live from one to ten years, depending on species. Habitat and distribution Millipedes occur on all continents except Antarctica, and occupy almost all terrestrial habitats, ranging as far north as the Arctic Circle in Iceland, Norway, and Central Russia, and as far south as Santa Cruz Province, Argentina. Typically forest floor dwellers, they live in leaf litter, dead wood, or soil, with a preference for humid conditions. In temperate zones, millipedes are most abundant in moist deciduous forests, and may reach densities of over 1,000 individuals per square metre. Other habitats include coniferous forests, deserts, caves, and alpine ecosystems. Some species can survive freshwater floods and live submerged underwater for up to 11 months.
A few species occur near the seashore and can survive in somewhat salty conditions. The diplosegments of millipedes have evolved in conjunction with their burrowing habits, and nearly all millipedes adopt a mainly subterranean lifestyle. They use three main methods of burrowing: bulldozing, wedging, and boring. Members of the orders Julida, Spirobolida, and Spirostreptida lower their heads and barge their way into the substrate, the collum being the portion of their exoskeleton that leads the way. Flat-backed millipedes in the order Polydesmida tend to insert their front end, like a wedge, into a horizontal crevice, and then widen the crack by pushing upwards with their legs, the paranota in this instance constituting the main lifting surface. Boring is used by members of the order Polyzoniida. These have smaller segments at the front and increasingly large ones further back; they propel themselves forward into a crack with their legs, the wedge-shaped body widening the gap as they go. Some millipedes have adopted an above-ground lifestyle and lost the burrowing habit. This may be because they are too small to have enough leverage to burrow, or because they are too large to make the effort worthwhile, or in some cases because they move relatively fast (for a millipede) and are active predators. Most millipedes are detritivores and feed on decomposing vegetation, feces, or organic matter mixed with soil. They often play important roles in the breakdown and decomposition of plant litter: estimates of consumption rates for individual species range from 1 to 11 percent of all leaf litter, depending on species and region, and collectively millipedes may consume nearly all the leaf litter in a region. The leaf litter is fragmented in the millipede gut and excreted as pellets of leaf fragments, algae, fungi, and bacteria, which facilitates decomposition by the microorganisms. Where earthworm populations are low in tropical forests, millipedes play an important role in facilitating microbial decomposition of the leaf litter. Some millipedes are herbivorous, feeding on living plants, and some species can become serious pests of crops. Millipedes in the order Polyxenida graze algae from bark, and Platydesmida feed on fungi. A few species are omnivorous or occasionally carnivorous, feeding on insects, centipedes, earthworms, or snails. Some species have piercing mouth parts that allow them to suck up plant juices. Predators and parasites Millipedes are preyed on by a wide range of animals, including various reptiles, amphibians, birds, mammals, and insects. Mammalian predators such as coatis and meerkats roll captured millipedes on the ground to deplete and rub off their defensive secretions before consuming their prey, and certain poison dart frogs are believed to incorporate the toxic compounds of millipedes into their own defences. Several invertebrates have specialised behaviours or structures to feed on millipedes, including larval glowworm beetles, Probolomyrmex ants, chlamydephorid slugs, and predaceous dung beetles of the genera Sceliages and Deltochilum. A large subfamily of assassin bugs, the Ectrichodiinae, with over 600 species, has specialized in preying upon millipedes. Parasites of millipedes include nematodes, phaeomyiid flies, and acanthocephalans. Nearly 30 fungal species of the order Laboulbeniales have been found growing externally on millipedes, but some species may be commensal rather than parasitic.
Due to their lack of speed and their inability to bite or sting, millipedes' primary defence mechanism is to curl into a tight coil, protecting their delicate legs inside an armoured exoskeleton. Many species also emit various foul-smelling liquid secretions through microscopic holes called ozopores (the openings of "odoriferous" or "repugnatorial glands") along the sides of their bodies as a secondary defence. Among the many irritant and toxic chemicals found in these secretions are alkaloids, benzoquinones, phenols, terpenoids, and hydrogen cyanide. Some of these substances are caustic and can burn the exoskeleton of ants and other insect predators, and the skin and eyes of larger predators. Primates such as capuchin monkeys and lemurs have been observed intentionally irritating millipedes in order to rub the chemicals on themselves to repel mosquitoes. Some of these defensive compounds also show antifungal activity. The bristly millipedes (order Polyxenida) lack both an armoured exoskeleton and odoriferous glands, and instead are covered in numerous bristles that in at least one species, Polyxenus fasciculatus, detach and entangle ants.

Other inter-species interactions

Some millipedes form mutualistic relationships with organisms of other species, in which both species benefit from the interaction, or commensal relationships, in which only one species benefits while the other is unaffected. Several species form close relationships with ants, a relationship known as myrmecophily, especially within the family Pyrgodesmidae (Polydesmida), which contains "obligate myrmecophiles", species which have only been found in ant colonies. More species are "facultative myrmecophiles", being non-exclusively associated with ants, including many species of Polyxenida that have been found in ant nests around the world. Many millipede species have commensal relationships with mites of the orders Mesostigmata and Astigmata. Many of these mites are believed to be phoretic rather than parasitic, which means that they use the millipede host as a means of dispersal. A novel interaction between millipedes and mosses was described in 2011, in which individuals of the newly discovered Psammodesmus bryophorus were found to have up to ten species of moss living on their dorsal surface, in what may provide camouflage for the millipede and increased dispersal for the mosses.

Interactions with humans

Millipedes generally have little impact on human economic or social well-being, especially in comparison with insects, although locally they can be a nuisance or agricultural pest. Millipedes do not bite, and their defensive secretions are mostly harmless to humans, usually causing only minor discolouration on the skin, but the secretions of some tropical species may cause pain, itching, local erythema, edema, blisters, eczema, and occasionally cracked skin; these skin injuries are known as millipede burn. Eye exposure to these secretions causes general irritation and potentially more severe effects such as conjunctivitis and keratitis. First aid consists of flushing the area thoroughly with water; further treatment is aimed at relieving the local effects. Some millipedes are considered household pests, including Xenobolus carnifex which can infest thatched roofs in India, and Ommatoiulus moreleti, which periodically invades homes in Australia.
Other species exhibit periodical swarming behaviour, which can result in home invasions, crop damage, and train delays when the tracks become slippery with the crushed remains of hundreds of millipedes. Some millipedes can cause significant damage to crops: the spotted snake millipede (Blaniulus guttulatus) is a noted pest of sugar beets and other root crops, and as a result is one of the few millipedes with a common name. Some of the larger millipedes in the orders Spirobolida, Spirostreptida, and Sphaerotheriida are popular as pets. Some species commonly sold or kept include species of Archispirostreptus, Aphistogoniulus, Narceus, and Orthoporus. Millipedes appear in folklore and traditional medicine around the world. Some cultures associate millipede activity with coming rains. In the Yoruba culture of Nigeria, millipedes are used in pregnancy and business rituals, and crushed millipedes are used to treat fever, whitlow, and convulsion in children. In Zambia, smashed millipede pulp is used to treat wounds, and the Bafia people of Cameroon use millipede juice to treat earache. In certain Himalayan Bhotiya tribes, dry millipede smoke is used to treat haemorrhoids. Native people in Malaysia use millipede secretions in poison-tipped arrows. The secretions of Spirobolus bungii have been observed to inhibit division of human cancer cells. The only recorded usage of millipedes as food by humans comes from the Bobo people of Burkina Faso, who consume boiled, dried millipedes in tomato sauce. Millipedes have also inspired and played roles in scientific research. In 1963, a walking vehicle with 36 legs was designed, said to have been inspired by a study of millipede locomotion. Experimental robots have had the same inspiration, in particular for cases where heavy loads need to be carried through tight spaces involving turns and curves. In biology, some authors have advocated millipedes as model organisms for the study of arthropod physiology and the developmental processes controlling the number and shape of body segments.
Motivation plays a significant role in student learning. Students with high motivation levels tend to have the best learning outcomes. Motivation is particularly useful in encouraging persistence in applying effort to a learning task and trying new approaches. Although motivation is highly influenced by student characteristics and tends to vary across different learning areas, the classroom context also plays an important role in influencing student motivation. There are many strategies that teachers can use to promote and support their students' motivation. As many theorists define motivation as arising from a need to satisfy psychological desires for autonomy, competence and connection or belonging, the following motivational strategies focus on meeting those needs. 1. Build strong relationships Positive relationships with teachers are significantly related to positive motivation and to greater achievement. Motivation is affected by the level of emotional and social support students perceive. Students who believe their teachers are not interested in their learning report more negative motivation and experience lower achievement. Research has demonstrated that relationships with teachers are particularly important for Māori students. Teachers need to show support and concern for all students and be interested in their ideas and experiences, as well as what they produce in class. Try to ensure you communicate a sense of caring for how each individual student is doing. Showing sensitivity and kindness to students enhances the affective climate of the classroom, whereas threats, sarcasm, directives and imposed goals result in negative affective experiences for students. 2. Promote students' sense of membership of the classroom and the school Students' motivation is strongest when they believe they are socially accepted by teachers and peers and their school environment is fair, trustworthy and centred on concern for everyone's welfare. Motivation tends to be lowest in environments that are perceived as unwelcoming and untrustworthy. When students have a strong sense of membership of the class and school, they are more likely to adopt the values endorsed by the school. Students from negatively stereotyped groups are most sensitive to cues of belonging and trustworthiness. Teachers are authority figures who can set the tone for relations in the classroom, and make students feel they are valued group members. Provide plenty of opportunities for positive interactions with and among students. Try to create a sense of belonging while also valuing students' social and cultural identities. Activities that engender a shared sense of purpose will motivate students and enhance their sense of belonging. 3. Enhance task interest and engagement In classrooms characterised by positive attitudes and emotions, and high levels of interest in the tasks undertaken, students report greater motivation and perceptions of competence. Students who engage with activities and tasks relating to their interests find learning easier, more enjoyable and more related to their lives, and they perform better than those without personalised content. Linking content to existing interests of the students also helps students to connect their prior knowledge to academic learning. Interest can be fostered through well-chosen texts and resources, as well as activities that engender students' curiosity through provocative questioning or generating suspense.
Interest is further enhanced by providing a choice of activities and using hands-on activities connected to the learning content, although be wary of adding irrelevant or decorative details to a task in ways that detract from learning outcomes. Students themselves might be able to adapt academic problems to a context within their areas of interest if invited, although task interest is not simply a matter of allowing students to engage in work that aligns with their existing interests. Promoting a positive affective climate in the classroom can help improve students’ attitudes towards the subject being taught, and strategies such as humour can enhance their enjoyment of the topic. The teacher’s enthusiasm for the subject or task can also help to engender students’ interest. 4. Emphasise the relevance and importance of the learning Research finds that when teachers emphasise the importance of learning a particular strategy or piece of content, student motivation increases. Students perceive more challenging classes as more important, although it is necessary to find the optimal level of challenge: when challenge is too low or too high, students attribute low importance to the learning task. Elaborating on and clarifying students’ responses and summarising learning regularly throughout the lesson send the message that the learning is important. In addition, discussing the new knowledge students have developed through the application of particular strategies encourages students to perceive those strategies as valuable. In contrast, emphasising speed, coverage of content or accuracy over understanding, and failing to probe students’ answers for explanation and justification, reduces the level of challenge and also implies that student performance, rather than learning, is most important. Ways to communicate the importance and relevance of a topic to students include: - increasing challenge - providing opportunities for students to grapple with the central tenets and abstract principles of a topic - inviting students to personalise a topic by putting themselves into the context of a topic - discussing universal human experiences that relate to a topic - inviting students to explore the relevance and importance of their current learning by investigating how particular academic concepts are used in their communities 5. Connect with students’ goals, values and identities When students see learning tasks as useful and relevant to their goals, they develop more interest, persist longer and perform better. Students who see their future adult self as being dependent on their educational achievement spend more time on homework and have better grades. Students might have between one and five core goals for their self-development and future plans. It is helpful if school goal setting can tap into and co-ordinate with these goals. Students may value a learning area as important for their self-worth and identity, or they might value an area for its usefulness in accomplishing future goals relevant to their career or life plans. For example, one student may perceive mathematics as useful for eventually owning a business, another may view mathematics as handy for calculating cricket batting averages, and another may simply enjoy maths for its own sake. It is important that students feel that learning activities are congruent with their personal identities, as this makes them more motivated to persist with difficulty. 
If learning activities are at odds with students’ personal identities (‘This activity is not for people like me’), then difficulty is taken as proof that the activity is pointless and unachievable. Knowing your students well means you can promote interest in an academic topic by linking it to the students’ recreational activities or career goals. Otherwise, you can ask students to reflect on a curriculum unit and generate their own connections so that they discover the importance and usefulness for themselves – this is also an opportunity to learn more about your students. Not all activities and lessons can be inherently interesting to students, in which case it is important to ensure you offer a rationale for why the activity or lesson is useful to the student and worth the effort. Teachers can influence students to see their future adult self as dependent on achievement. Posters and images of possible careers might provide ongoing reminders so that students’ desired goals remain in mind. It is also important that parents value school and subject areas. Some research shows that providing information about the importance of a subject to parents leads to increases in student motivation and achievement within that subject. 6. Give students autonomy and responsibility Motivation is impaired when students feel they have no control over a situation. Giving students choices and empowering student initiative enhances motivation, effort, interest, positive emotions and perceptions of personal control and competence, as well as achievement. Most students perform better on self-adapted tests in which they can select test items from various options. Providing choices can also increase risk taking and help students develop interest for particular activities. However, for students from some cultural groups, motivation might be highest when authority figures or peers make choices for them. It is important to carefully plan how to make choices available to students, basing them on your students’ ability to understand and make choices. Some students may need scaffolding to help them make appropriate choices. Choices must be appropriate for students’ abilities and needs, and be a good match with student interests (although be wary of trying to align all learning activities with students’ current interests at the risk of compromising the quality of the learning or missing the opportunity to create interest and build knowledge in a new subject area). It might be that students get to choose from a list of topic-related activities provided by the teacher, or that they select their own tasks to work on. They might also be involved in setting due dates, choosing student working groups, and the order of task completion. Being able to choose how to apportion their time, as well as among several different versions of a task, might be most motivational for students with skills in self-regulation. However, it is important that all students, not just the highest-performing students, get to choose activities and resources. Some choices are more effective than others. 
The best type of choices: - allow students to reflect their personal interests, values and goals - are unrestricted choices, with no indication of which option to choose, rather than controlled choices - offer choice between 2–4 options: more than 5 options increases thinking effort and therefore decreases motivation, and fewer than 2 options undermines the perception of choice - allow students to repeatedly return to a list of options to make another choice rather than making single or multiple choices at one time only 7. Develop students' self-efficacy Students have important needs in relation to feeling competent. Motivation is strongly influenced by students' perceived expectations of success or failure, which are in turn influenced by teacher expectations. Motivation, self-efficacy and achievement are positively affected when outcomes are represented as the result of student effort and action. Optimal learning experiences occur when the student perceives the challenge of the task as equal to his or her skills to achieve it. When challenge and skills are unbalanced, learning activities are not rewarding and perhaps even evoke anxiety. The highest levels of motivation occur when there is both high challenge and high feelings of self-efficacy. One way to inspire increased motivation is to increase students' expectations of success and their sense of self-efficacy. Tell students you believe in them and that they will learn a particular piece of content or strategy if they study hard and are motivated. Ways to ensure students experience success include: - ensuring optimal challenge - focusing on personal improvement rather than outperforming others - providing feedback which helps students master content - helping students set realistic goals - structuring activities with clear processes for engagement with the task - reinforcing key learning throughout the lesson, which increases self-efficacy as students are clear that they are making progress - giving frequent, positive feedback focused on elaborating what students have learned and understood - attributing success to effort and strategies rather than ability 8. Set appropriate goals and provide regular feedback for learning Goals can motivate students by providing a purpose for using different learning strategies and encourage students' persistence and effort over time, especially when goals are related to mastery of content and strategies rather than to specific performance. What is more, when students perceive praise or feedback as intended to facilitate their task mastery, they tend to feel their autonomy has been supported and are consequently motivated by the feedback. However, when students feel that the teacher is trying to control their learning and behaviour, there is a negative impact on motivation. Goals direct attention and action, and they also mobilise effort and motivation. For example, research has found that when students were given goals for reading focused on conceptual themes and knowledge content, they applied reading comprehension strategies with greater interest, effort and attention. Harder goals (that are acceptable to, and achievable by, the student) lead to higher levels of motivation and performance. Difficulty can be interpreted as a need to increase attention and therefore heightens motivation. Perceiving a task as too easy makes it seem not worth any effort, and motivation is consequently reduced. Likewise, perceiving a task as impossible halts motivation and effort abruptly.
Between these extremes, increased difficulty enhances motivation. Set goals with students that are clear, measurable and provide a structured progression through incremental goals to the final goal. Plan points at which to stop and measure progress towards the goal. Provide feedback that indicates to students how they are progressing towards the goal and perhaps offers suggestions to foster achievement of the goal. Feedback is most motivating when it: - allows and empowers students' choice in subsequent learning strategies - reflects a belief in the students' capability to learn/complete a task - is positive, frequent and elaborative - is used to help students develop understanding - is linked to clearly articulated high standards 9. Provide opportunities for co-operative learning Meeting students' need for connection with others can enhance motivation for the related learning activity. Students show increased motivation when teachers provide frequent opportunities for them to share their questions and what they have learned with their peers. Students also often demonstrate increased work effort when there is a sense of collective responsibility for learning. In addition, research shows students given collaborative learning opportunities engage in deeper-level processing of information. Plan for student-to-student dialogue within a lesson, and identify activities that can be undertaken in pairs or groups. Develop group tasks in which the work is divided between students. Ensure each student has a clear responsibility and accountability in relation to a group goal. 10. Explicitly teach the concept of motivation and talk about how motivation supports learning Supporting students' understanding of motivation can aid their ability to self-regulate their levels of motivation and help them to identify strategies and behaviours that increase or lower their motivation. Although motivational support strategies should be embedded into instruction, it can also be worthwhile to explicitly discuss motivation with students. This helps students to understand the importance of effort in learning and how finding ways to get motivated can help them put in the necessary effort. Emphasise the importance of motivation for success in learning. Talk regularly about how students must work hard and how effort helps them to get smarter by linking effort to outcomes. Ensure that you and your students have the same perception of effort: discuss what it means to try. Help students differentiate between productive and non-productive effort by explaining that effort is more than the time spent on a task but also means using effective strategies, practising and seeking help. Take the mystery out of learning something new by demonstrating that it is all about strategy and motivation.
By Dr Vicki Hargraves
Applied Behavior Analysis (ABA) What is Applied Behavior Analysis? Applied Behavior Analysis (ABA) is a therapy based on the science of learning and behavior. Behavior analysis helps us to understand: - How behavior works - How behavior is affected by the environment - How learning takes place ABA therapy applies our understanding of how behavior works to real situations. The goal is to increase behaviors that are helpful and decrease behaviors that are harmful or affect learning. ABA therapy programs can help: - Increase language and communication skills - Improve attention, focus, social skills, memory, and academics - Decrease problem behaviors The methods of behavior analysis have been used and studied for decades. They have helped many kinds of learners gain different skills – from healthier lifestyles to learning a new language. Therapists have used ABA to help children with autism and related developmental disorders since the 1960s. How does ABA therapy work? Applied Behavior Analysis involves many techniques for understanding and changing behavior. ABA is a flexible treatment: - Can be adapted to meet the needs of each unique person - Provided in many different locations – at home, at school, and in the community - Teaches skills that are useful in everyday life - Can involve one-to-one teaching or group instruction Positive reinforcement is one of the main strategies used in ABA. When a behavior is followed by something that is valued (a reward), a person is more likely to repeat that behavior. Over time, this encourages positive behavior change. First, the therapist identifies a goal behavior. Each time the person uses the behavior or skill successfully, they get a reward. The reward is meaningful to the individual – examples include praise, a toy or book, watching a video, access to a playground or other location, and more. Positive rewards encourage the person to continue using the skill. Over time this leads to meaningful behavior change. Antecedent, Behavior, Consequence Understanding antecedents (what happens before a behavior occurs) and consequences (what happens after the behavior) is another important part of any ABA program. The following three steps – the "A-B-Cs" – help us teach and understand behavior: - An antecedent: this is what occurs right before the target behavior. It can be verbal, such as a command or request. It can also be physical, such as a toy or object, or a light, sound, or something else in the environment. An antecedent may come from the environment, from another person, or be internal (such as a thought or feeling). - A resulting behavior: this is the person's response or lack of response to the antecedent. It can be an action, a verbal response, or something else. - A consequence: this is what comes directly after the behavior. It can include positive reinforcement of the desired behavior, or no reaction for incorrect/inappropriate responses. Looking at A-B-Cs helps us understand: - Why a behavior may be happening - How different consequences could affect whether the behavior is likely to happen again For example: - Antecedent: The teacher says "It's time to clean up your toys" at the end of the day. - Behavior: The student yells "no!" - Consequence: The teacher removes the toys and says "Okay, toys are all done." How could ABA help the student learn a more appropriate behavior in this situation? - Antecedent: The teacher says "time to clean up" at the end of the day.
- Behavior: The student is reminded to ask, “Can I have 5 more minutes?” - Consequence: The teacher says, “Of course you can have 5 more minutes!” With continued practice, the student will be able to replace the inappropriate behavior with one that is more helpful. This is an easier way for the student to get what she needs! What Does an ABA Program Involve? Good ABA programs for autism are not "one size fits all." ABA should not be viewed as a canned set of drills. Rather, each program is written to meet the needs of the individual learner. The goal of any ABA program is to help each person work on skills that will help them become more independent and successful in the short term as well as in the future. Planning and Ongoing Assessment A qualified and trained behavior analyst (BCBA) designs and directly oversees the program. They customize the ABA program to each learner's skills, needs, interests, preferences and family situation. The BCBA will start by doing a detailed assessment of each person’s skills and preferences. They will use this to write specific treatment goals. Family goals and preferences may be included, too. Treatment goals are written based on the age and ability level of the person with ASD. Goals can include many different skill areas, such as: - Communication and language - Social skills - Self-care (such as showering and toileting) - Play and leisure - Motor skills - Learning and academic skills The instruction plan breaks down each of these skills into small, concrete steps. The therapist teaches each step one by one, from simple (e.g. imitating single sounds) to more complex (e.g. carrying on a conversation). The BCBA and therapists measure progress by collecting data in each therapy session. Data helps them to monitor the person’s progress toward goals on an ongoing basis. The behavior analyst regularly meets with family members and program staff to review information about progress. They can then plan ahead and adjust teaching plans and goals as needed. ABA Techniques and Philosophy The instructor uses a variety of ABA procedures. Some are directed by the instructor and others are directed by the person with autism. Parents, family members and caregivers receive training so they can support learning and skill practice throughout the day. The person with autism will have many opportunities to learn and practice skills each day. This can happen in both planned and naturally occurring situations. For instance, someone learning to greet others by saying "hello" may get the chance to practice this skill in the classroom with their teacher (planned) and on the playground at recess (naturally occurring). The learner receives an abundance of positive reinforcement for demonstrating useful skills and socially appropriate behaviors. The emphasis is on positive social interactions and enjoyable learning. The learner receives no reinforcement for behaviors that pose harm or prevent learning. ABA is effective for people of all ages. It can be used from early childhood through adulthood! Who provides ABA services? A board-certified behavior analyst (BCBA) provides ABA therapy services. To become a BCBA, the following is needed: - Earn a master’s degree or PhD in psychology or behavior analysis - Pass a national certification exam - Seek a state license to practice (in some states) ABA therapy programs also involve therapists, or registered behavior technicians (RBTs). These therapists are trained and supervised by the BCBA. 
They work directly with children and adults with autism to practice skills and work toward the individual goals written by the BCBA. You may hear them referred to by a few different names: behavioral therapists, line therapists, behavior tech, etc. To learn more, see the Behavior Analyst Certification Board website. What is the evidence that ABA works? ABA is considered an evidence-based best practice treatment by the US Surgeon General and by the American Psychological Association. “Evidence based” means that ABA has passed scientific tests of its usefulness, quality, and effectiveness. ABA therapy includes many different techniques. All of these techniques focus on antecedents (what happens before a behavior occurs) and on consequences (what happens after the behavior). More than 20 studies have established that intensive and long-term therapy using ABA principles improves outcomes for many but not all children with autism. “Intensive” and “long term” refer to programs that provide 25 to 40 hours a week of therapy for 1 to 3 years. These studies show gains in intellectual functioning, language development, daily living skills and social functioning. Studies with adults using ABA principles, though fewer in number, show similar benefits. Is ABA covered by insurance? Sometimes. Many types of private health insurance are required to cover ABA services. This depends on what kind of insurance you have, and what state you live in. All Medicaid plans must cover treatments that are medically necessary for children under the age of 21. If a doctor prescribes ABA and says it is medically necessary for your child, Medicaid must cover the cost. Please see our insurance resources for more information about insurance and coverage for autism services. You can also contact the Autism Response Team if you have difficulty obtaining coverage, or need additional help. Where do I find ABA services? To get started, follow these steps: - Speak with your pediatrician or other medical provider about ABA. They can discuss whether ABA is right for your child. They can write a prescription for ABA if it is necessary for your insurance. - Check whether your insurance company covers the cost of ABA therapy, and what your benefit is. - Search our resource directory for ABA providers near you. Or, ask your child’s doctor and teachers for recommendations. - Call the ABA provider and request an intake evaluation. Have some questions ready (see below!) What questions should I ask? It’s important to find an ABA provider and therapists who are a good fit for your family. The first step is for therapists to establish a good relationship with your child. If your child trusts his therapists and enjoys spending time with them, therapy will be more successful – and fun! The following questions can help you evaluate whether a provider will be a good fit for your family. Remember to trust your instincts, as well! - How many BCBAs do you have on staff? - Are they licensed with the BACB and through the state? - How many behavioral therapists do you have? - How many therapists will be working with my child? - What sort of training do your therapists receive? How often? - How much direct supervision do therapists receive from BCBAs weekly? - How do you manage safety concerns? - What does a typical ABA session look like? - Do you offer home-based or clinic-based therapy? - How do you determine goals for my child? Do you consider input from parents? - How often do you re-evaluate goals? - How is progress evaluated? 
- How many hours per week can you provide? - Do you have a wait list? - What type of insurance do you accept?
In classical mechanics, Newton's laws of motion are three laws that describe the relationship between the motion of an object and the forces acting on it. The first law states that an object either remains at rest or continues to move at a constant velocity, unless it is acted upon by an external force. The second law states that the rate of change of momentum of an object is directly proportional to the force applied, or, for an object with constant mass, that the net force on an object is equal to the mass of that object multiplied by the acceleration. The third law states that when one object exerts a force on a second object, that second object exerts a force that is equal in magnitude and opposite in direction on the first object. The three laws of motion were first compiled by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687. Newton used them to explain and investigate the motion of many physical objects and systems, which laid the foundation for Newtonian mechanics. The first law states that an object at rest will stay at rest, and an object in motion will stay in motion unless acted on by a net external force. Mathematically, this is equivalent to saying that if the net force on an object is zero, then the velocity of the object is constant. Newton's first law is often referred to as the law of inertia. The second law states that the rate of change of momentum of a body over time is directly proportional to the force applied, and occurs in the same direction as the applied force: F = dp/dt = d(mv)/dt = ma, where F is the net force applied, m is the mass of the body, and a is the body's acceleration. Thus, the net force applied to a body produces a proportional acceleration. Variable-mass systems, like a rocket burning fuel and ejecting spent gases, are not closed and cannot be directly treated by making mass a function of time in the second law. The equation of motion for a body whose mass m varies with time by either ejecting or accreting mass is obtained by applying the second law to the entire, constant-mass system consisting of the body and its ejected or accreted mass; the result is F + u dm/dt = m dv/dt, where u is the exhaust velocity of the escaping or incoming mass relative to the body. From this equation one can derive the equation of motion for a varying mass system, for example, the Tsiolkovsky rocket equation. Under some conventions, the quantity u dm/dt on the left-hand side, which represents the advection of momentum, is defined as a force (the force exerted on the body by the changing mass, such as rocket exhaust) and is included in the quantity F. Then, by substituting the definition of acceleration, the equation becomes F = ma. The third law states that all forces between two objects exist in equal magnitude and opposite direction: if one object A exerts a force F_A on a second object B, then B simultaneously exerts a force F_B on A, and the two forces are equal in magnitude and opposite in direction: F_A = -F_B. The third law means that all forces are interactions between different bodies, or different regions within one body, and thus that there is no such thing as a force that is not accompanied by an equal and opposite force. In some situations, the magnitude and direction of the forces are determined entirely by one of the two bodies, say Body A; the force exerted by Body A on Body B is called the "action", and the force exerted by Body B on Body A is called the "reaction".
This law is sometimes referred to as the action-reaction law, with F_A called the "action" and F_B the "reaction". In other situations the magnitude and directions of the forces are determined jointly by both bodies and it isn't necessary to identify one force as the "action" and the other as the "reaction". The action and the reaction are simultaneous, and it does not matter which is called the action and which is called the reaction; both forces are part of a single interaction, and neither force exists without the other. The two forces in Newton's third law are of the same type (e.g., if the road exerts a forward frictional force on an accelerating car's tires, then it is also a frictional force that Newton's third law predicts for the tires pushing backward on the road). From a conceptual standpoint, Newton's third law is seen when a person walks: they push against the floor, and the floor pushes against the person. Similarly, the tires of a car push against the road while the road pushes back on the tires; the tires and road simultaneously push against each other. In swimming, a person interacts with the water, pushing the water backward, while the water simultaneously pushes the person forward; both the person and the water push against each other. The reaction forces account for the motion in these examples. These forces depend on friction; a person or car on ice, for example, may be unable to exert the action force to produce the needed reaction force. Newton used the third law to derive the law of conservation of momentum; from a deeper perspective, however, conservation of momentum is the more fundamental idea (derived via Noether's theorem from Galilean invariance), and holds in cases where Newton's third law appears to fail, for instance when force fields as well as particles carry momentum, and in quantum mechanics. The ancient Greek philosopher Aristotle had the view that all objects have a natural place in the universe: that heavy objects (such as rocks) wanted to be at rest on the Earth and that light objects like smoke wanted to be at rest in the sky and the stars wanted to remain in the heavens. He thought that a body was in its natural state when it was at rest, and that for the body to move in a straight line at a constant speed an external agent was needed continually to propel it, otherwise it would stop moving. Galileo Galilei, however, realised that a force is necessary to change the velocity of a body, i.e., acceleration, but no force is needed to maintain its velocity. In other words, Galileo stated that, in the absence of a force, a moving object will continue moving. (The tendency of objects to resist changes in motion was what Johannes Kepler had called inertia.) This insight was refined by Newton, who made it into his first law, also known as the "law of inertia": no force means no acceleration, and hence the body will maintain its velocity. As Newton's first law is a restatement of the law of inertia which Galileo had already described, Newton appropriately gave credit to Galileo. Newton's laws were verified by experiment and observation for over 200 years, and they are excellent approximations at the scales and speeds of everyday life. Newton's laws of motion, together with his law of universal gravitation and the mathematical techniques of calculus, provided for the first time a unified quantitative explanation for a wide range of physical phenomena.
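The link between the third law and conservation of momentum mentioned above can be made concrete with a short numerical sketch. The following Python snippet is purely illustrative: the masses, the spring-like internal force, and the time step are arbitrary assumptions, not taken from any source discussed here. It integrates the second law for two bodies that exert equal and opposite forces on each other and checks that their combined momentum does not change.

    # Illustrative sketch only: two bodies, A and B, interact solely with each other.
    # By the third law the force on B is minus the force on A, so the total momentum
    # m_a*v_a + m_b*v_b should stay constant while each body's own momentum changes
    # (second law: dp/dt = F). All numbers below are arbitrary assumptions.
    m_a, m_b = 2.0, 5.0      # masses (kg)
    v_a, v_b = 1.0, -0.5     # initial velocities (m/s)
    x_a, x_b = 0.0, 3.0      # initial positions (m)
    k = 10.0                 # strength of the assumed spring-like internal force (N/m)
    dt = 1e-4                # time step (s)

    p_initial = m_a * v_a + m_b * v_b

    for _ in range(100_000):             # simulate 10 s
        f_on_a = k * (x_b - x_a)         # force exerted on A by B
        f_on_b = -f_on_a                 # third law: equal magnitude, opposite direction
        v_a += f_on_a / m_a * dt         # second law: dv = (F/m) dt
        v_b += f_on_b / m_b * dt
        x_a += v_a * dt
        x_b += v_b * dt

    p_final = m_a * v_a + m_b * v_b
    print(p_initial, p_final)            # both stay at about -0.5 kg*m/s

Because the same internal force enters both velocity updates with opposite sign, the sum of the two momenta changes by exactly zero at every step, mirroring the analytic statement d(p_A + p_B)/dt = F_AB + F_BA = 0 for an isolated pair of bodies.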
For example, in the third volume of the Principia, Newton showed that his laws of motion, combined with the law of universal gravitation, explained Kepler's laws of planetary motion. Newton's laws are applied to objects which are idealised as single point masses, in the sense that the size and shape of the object's body are neglected to focus on its motion more easily. This can be done when the object is small compared to the distances involved in its analysis, or the deformation and rotation of the body are of no importance. In this way, even a planet can be idealised as a particle for analysis of its orbital motion around a star.

In their original form, Newton's laws of motion are not adequate to characterise the motion of rigid bodies and deformable bodies. Leonhard Euler in 1750 introduced a generalisation of Newton's laws of motion for rigid bodies called Euler's laws of motion, later applied as well for deformable bodies assumed as a continuum. If a body is represented as an assemblage of discrete particles, each governed by Newton's laws of motion, then Euler's laws can be derived from Newton's laws. Euler's laws can, however, be taken as axioms describing the laws of motion for extended bodies, independently of any particle structure.

Newton's laws hold only with respect to a certain set of frames of reference called Newtonian or inertial reference frames. Some authors interpret the first law as defining what an inertial reference frame is; from this point of view, the second law holds only when the observation is made from an inertial reference frame, and therefore the first law cannot be proved as a special case of the second. Other authors do treat the first law as a corollary of the second. The explicit concept of an inertial frame of reference was not developed until long after Newton's death.

These three laws hold to a good approximation for macroscopic objects under everyday conditions. However, Newton's laws (combined with universal gravitation and classical electrodynamics) are inappropriate for use in certain circumstances, most notably at very small scales, at very high speeds, or in very strong gravitational fields. Therefore, the laws cannot be used to explain phenomena such as conduction of electricity in a semiconductor, optical properties of substances, errors in non-relativistically corrected GPS systems, and superconductivity. Explanation of these phenomena requires more sophisticated physical theories, including general relativity and quantum field theory. In special relativity, the second law holds in the original form F = dp/dt, where F and p are four-vectors. Special relativity reduces to Newtonian mechanics when the speeds involved are much less than the speed of light.

It is important to note that we cannot derive a general expression for Newton's second law for variable mass systems by treating the mass in F = dP/dt = d(Mv)/dt as a variable. [...] We can use F = dP/dt to analyze variable mass systems only if we apply it to an entire system of constant mass, having parts among which there is an interchange of mass. [Emphasis as in the original]

Recall that F = dP/dt was established for a system composed of a certain set of particles[. ... I]t is essential to deal with the same set of particles throughout the time interval[. ...] Consequently, the mass of the system can not change during the time of interest.
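As a small illustration of the variable-mass point above, the sketch below evaluates the Tsiolkovsky rocket equation, delta-v = u × ln(m0/m1), which follows from applying the second law to the constant-mass system of rocket plus exhaust. The code is Python; the exhaust velocity and mass ratio are made-up illustrative numbers, not values taken from the text.

```python
import math

def delta_v(exhaust_velocity, initial_mass, final_mass):
    """Tsiolkovsky rocket equation: delta-v = u * ln(m0 / m1).

    Follows from the variable-mass form of the second law,
    F + u*(dm/dt) = m*(dv/dt), with no external force F.
    """
    return exhaust_velocity * math.log(initial_mass / final_mass)

# Illustrative numbers: 3 km/s exhaust velocity, 80% of lift-off mass is propellant.
u = 3000.0            # m/s
m0, m1 = 100.0, 20.0  # arbitrary mass units; only the ratio matters
print(delta_v(u, m0, m1))  # 3000 * ln(5), roughly 4828 m/s
```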
Quoting Newton in the Principia: "It is not one action by which the Sun attracts Jupiter, and another by which Jupiter attracts the Sun; but it is one action by which the Sun and Jupiter mutually endeavour to come nearer together." Any single force is only one aspect of a mutual interaction between two bodies.

[...] while Newton had used the word 'body' vaguely and in at least three different meanings, Euler realized that the statements of Newton are generally correct only when applied to masses concentrated at isolated points.
A function can be defined in four ways: 1. using tables; 2. using graphs; 3. defining it by a formula; 4. describing it with words.

Work Step by Step

1. A function may be defined by a table listing the argument $x$ and the corresponding value of the function $f(x)$.
2. A function may be given by a curve in the $xy$ plane, where the coordinates of every point on the curve represent, respectively, the argument and the value of the function: $P(x,f(x))$.
3. A function may be defined by a formula, e.g. $f(x)=ax^2+bx+c$.
4. A function may be described with words. For example, you can say that a price function is defined as follows: its argument is a product, and its value is the price of that product.
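To make the first and third of these ways concrete, here is a minimal sketch in Python (the language and all names, such as price_table and f, are illustrative choices, not from the text): a function defined by a table is just a lookup of stored argument/value pairs, while a function defined by a formula computes its value from the argument.

```python
# 1. By a table: the function is a lookup of stored argument/value pairs.
#    This also matches the "described with words" price function above.
price_table = {"apple": 0.50, "bread": 2.25, "milk": 1.80}

def price(product):
    """Argument is a product; value is the price of that product."""
    return price_table[product]

# 3. By a formula: f(x) = a*x^2 + b*x + c with chosen coefficients a, b, c.
a, b, c = 1, -3, 2

def f(x):
    return a * x**2 + b * x + c

print(price("bread"))  # 2.25
print(f(2))            # 1*4 - 3*2 + 2 = 0
```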
Algebra 1 or elementary algebra includes the traditional topics studied in a modern elementary algebra course. Basic arithmetic operations involve numbers together with the mathematical operations +, -, ×, ÷, while algebra also involves variables such as x, y, and z, combined with those operations to form meaningful mathematical expressions. Algebra helps in representing different situations or problems as mathematical expressions. The concepts that come under Algebra 1 or elementary algebra include variables, evaluating expressions and equations, properties of equalities and inequalities, and solving algebraic and linear equations in one or two variables.

The main sections covered below are:
1. What is Algebra 1?
2. Algebra 1 Topics
3. Laws of Algebra
4. Difference between Algebra 1 and Algebra 2
5. Algebra 1: Tips and Tricks
6. FAQs on Algebra 1

What is Algebra 1?

Algebra 1 consists of the general concepts of algebra. It introduces evaluating equations and inequalities, real numbers, and their properties, which include additive and multiplicative identities, inverse operations, and the distributive and commutative properties. In Algebra 1, we are also introduced to the concept of polynomials, and we incorporate a bit of geometry to calculate the area, volume, and perimeter of shapes using algebraic expressions instead of numbers.

Algebra 1 or elementary algebra deals with solving algebraic expressions to reach a valid answer. In Algebra 1, simple variables like x and y are represented in the form of an equation. Based on the degree of the variable, equations can be categorized into different types, namely linear equations, quadratic equations, cubic equations, and so on. Linear equations have the forms ax + b = c, ax + by + c = 0, and ax + by + cz + d = 0. Based on the degree of the variables, elementary algebra then branches out into quadratic equations and polynomials. The general form of a quadratic equation is ax^2 + bx + c = 0, and of a polynomial equation it is ax^n + bx^(n-1) + cx^(n-2) + ... + k = 0.

Algebra 1 Topics

Algebra is divided into numerous topics to allow for a detailed study. Algebra 1 is divided into 12 chapters, and each chapter is divided into several lessons. These 12 chapters in Algebra 1 are: Real Numbers and Their Operations; Linear Equations and Inequalities; An Introduction To Functions; Graphing Lines; Solving Linear Systems; Polynomials and Their Operations; Factoring and Solving by Factoring; Exponents and Exponential Functions; Rational Expressions and Equations; Radical Expressions and Equations; Solving Quadratic Equations and Graphing Parabolas; and Data Analysis and Probability.

Laws of Algebra

The basic laws of algebra are the associative, commutative, and distributive laws, presented below:

Commutative Law for Addition: a + b = b + a. Swapping the positions of the operands in an addition does not affect the result. If 4x + 3x = 7x, then 3x + 4x = 7x.

Commutative Law for Multiplication: a × b = b × a. Swapping the positions of the operands in a multiplication does not affect the result. If 2x × 4 = 8x, then 4 × 2x = 8x.

Associative Law for Addition: a + (b + c) = (a + b) + c. The grouping of addends does not affect the sum. If 3y + (4y + 5y) = 3y + 9y = 12y, then (3y + 4y) + 5y = 7y + 5y = 12y.

Associative Law for Multiplication: a × (b × c) = (a × b) × c. The grouping of factors does not affect the product. If 3a × (2b × 5c) = 3a × 10bc = 30abc, then (3a × 2b) × 5c = 6ab × 5c = 30abc.

Distributive Law for Addition: a × (b + c) = (a × b) + (a × c). Multiplying a number by the sum of two numbers gives the same result as multiplying the number by each of the two numbers individually and then adding the products. If 4x × (3y + 2y) = 4x × 5y = 20xy, then (4x × 3y) + (4x × 2y) = 12xy + 8xy = 20xy.

Distributive Law for Subtraction: a × (b - c) = (a × b) - (a × c). Multiplying a number by the difference of two numbers gives the same result as multiplying the number by each of the two numbers individually and then subtracting the products. If 4x × (3y - 2y) = 4x × y = 4xy, then (4x × 3y) - (4x × 2y) = 12xy - 8xy = 4xy.
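The identities in the list above can be spot-checked numerically. Below is a minimal Python sketch; the specific values chosen for x and y are illustrative, and the checks mirror the worked examples in the list rather than proving the laws in general.

```python
# Numeric spot-checks of the laws listed above, with x = 2 and y = 3 (illustrative).
x, y = 2, 3

# Commutative laws
assert 4*x + 3*x == 3*x + 4*x                       # a + b = b + a
assert (2*x) * 4 == 4 * (2*x)                       # a * b = b * a

# Associative laws
assert 3*y + (4*y + 5*y) == (3*y + 4*y) + 5*y       # a + (b + c) = (a + b) + c
assert (3*x) * ((2*y) * 5) == ((3*x) * (2*y)) * 5   # a * (b * c) = (a * b) * c

# Distributive laws
assert 4*x * (3*y + 2*y) == 4*x * 3*y + 4*x * 2*y   # a(b + c) = ab + ac
assert 4*x * (3*y - 2*y) == 4*x * 3*y - 4*x * 2*y   # a(b - c) = ab - ac

print("all identities hold for x =", x, "and y =", y)
```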
The rules for different properties under Algebra 1 can be understood better as shown below:
- The addition property of inequality: adding the same number to each side of an inequality produces an equivalent inequality.
- Negative exponents: the reciprocals of the corresponding positive exponents in exponential expressions.
- The quotient of powers property: when we divide powers with the same base, we simply subtract the exponents.
- Constants have a monomial degree of 0.

Difference Between Algebra 1 and Algebra 2

Algebra 1 and Algebra 2 can be distinguished based on the complexity and use of algebraic expressions. The following points explain the important differences between Algebra 1 and Algebra 2:
- Algebra 1 introduces you to the general concepts of algebra: you learn about variables and functions, the most important concepts in all of algebra. Algebra 2 is much more advanced and much more varied: you learn about everything from logarithms and complex numbers to implicit functions and conics to the fundamental theorem of algebra.
- Algebra 1 helps students gain a basic command of algebra topics, while Algebra 2 increases the complexity and deepens the understanding of the topics learned in Algebra 1.
- In Algebra 1, students learn how to manipulate exponents or polynomials and write them in simpler forms; in Algebra 2, students learn to apply the skills obtained in Algebra 1 and also learn more difficult techniques.
- Algebra 1 concentrates on solving equations and inequalities, while Algebra 2 concentrates on additional types of equations, such as exponential and logarithmic equations.
- Algebra 1 is essential for understanding Algebra 2, and Algebra 2 is essential for understanding concepts in calculus.

Tips and Tricks on Algebra 1
- To understand Algebra 1, we need to be familiar with pre-algebra topics like integers, one-step equations, inequalities and equations, graphs and functions, percent, probability, an introduction to geometry, and right triangles. Once we go through a refresher, we can proceed to Algebra 1.
- When multiplying two rational expressions in algebra, there is always a risk of getting false or extraneous solutions, so be careful with your calculations.
- We can add polynomials by simply adding the like terms to combine the two polynomials into one.

Solved Examples on Algebra 1

Example 1: Using the laws and properties of Algebra 1, evaluate the expression 4 × (x + 2), where x = 5.
Solution: Given x = 5. Substituting the value of x in 4 × (x + 2), we get 4 × (5 + 2) = 4 × 7 = 28.

Example 2: Solve the given expression for the value of x: 4 + 3 = x.
Solution: Given 4 + 3 = x. We simply perform the addition to find the value of x: 4 + 3 = 7, so x = 7. Therefore, the value of x is 7.

FAQs on Algebra 1

What is Algebra 1?
Elementary algebra or Algebra 1 includes the basic traditional topics studied in a modern elementary algebra course. Basic arithmetic operations involve numbers together with the mathematical operations +, -, ×, ÷, while algebra also involves variables such as x, y, and z, combined with operations like addition, subtraction, multiplication, and division, to form meaningful mathematical expressions.
What is Considered Algebra 1?
Algebra 1 consists of the general concepts of algebra. It introduces evaluating equations and inequalities, real numbers, and their properties, which include additive and multiplicative identities, inverse operations, and the distributive and commutative properties.

What is the Difference Between Algebra 1 and Algebra 2?
The difference between Algebra 1 and Algebra 2 can be understood using the following points:
- Algebra 1 helps students gain a basic command of algebra topics, while Algebra 2 increases the complexity and understanding of the topics learned in Algebra 1.
- In Algebra 1, students learn how to manipulate exponents or polynomials and write them in simpler forms, while in Algebra 2, students learn to apply the skills obtained in Algebra 1 and also learn more difficult techniques.
- Algebra 1 concentrates on solving equations and inequalities, while Algebra 2 concentrates on additional types of equations, such as exponential and logarithmic equations.
- Algebra 1 is essential for understanding Algebra 2, whereas Algebra 2 is essential for understanding concepts in calculus.

What is Standard Form in Algebra 1?
Standard form in Algebra 1 is a way of writing a given mathematical object, such as an equation, number, or expression, in a form that follows certain rules.

How to Learn Algebra 1 Fast?
The concepts of Algebra 1 can be mastered by following certain guidelines. The key points given below will help you ensure a thorough grasp of elementary algebra.
- Focus on basic arithmetic concepts.
- Remember the PEMDAS rule (the order of operations).
- Learn to distinguish clearly between the roles of variables, constants, exponents, and negative and positive numbers.
- Do a thorough revision of formulas.
- Work on practice problems.

What Grade is Algebra 1?
Algebra 1 or elementary algebra is the first math class you are required to take as part of middle school. In this part of algebra we study real numbers and explore solving, writing, and graphing linear equations. Polynomials, as well as quadratic equations and functions, are also included in Algebra 1.

What Topics are Covered in Algebra 1?
The topics covered in Algebra 1 are divided into different chapters. These chapters can be broadly classified into the following categories:
- Real Numbers and Their Operations
- Linear Equations and Inequalities
- An Introduction To Functions
- Graphing Lines
- Solving Linear Systems
- Polynomials and Their Operations
- Factoring and Solving by Factoring
- Exponents And Exponential Functions
- Rational Expressions and Equations
- Radical Expressions and Equations
- Solving Quadratic Equations and Graphing Parabolas
- Data Analysis And Probability

Is Algebra 1 or 2 Harder?
Algebra 1 is the building block of Algebra 2. Algebra 2 is a higher and more complex course, hence Algebra 2 is considerably harder than Algebra 1.

What is the First Thing you Learn in Algebra 1?
The first thing students learn in Algebra 1 is real numbers and their operations.

What are the Prerequisites to Understand Algebra 1 Better?
To understand Algebra 1, it is an advantage if you know the foundations of arithmetic, integers, fractions, decimals, percent, ratio, proportion, probability, an introduction to geometry, and right triangles.
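The two equation types emphasised above, linear equations of the form ax + b = c and quadratic equations of the form ax^2 + bx + c = 0, can also be solved programmatically. Below is a minimal Python sketch; the helper names solve_linear and solve_quadratic and the example coefficients are illustrative choices, not part of any standard curriculum.

```python
import math

# Solve the linear equation a*x + b = c for x (assumes a != 0).
def solve_linear(a, b, c):
    return (c - b) / a

# Solve the quadratic equation a*x^2 + b*x + c = 0 with the quadratic formula.
# Returns a tuple of real roots; the tuple is empty if the discriminant is negative.
def solve_quadratic(a, b, c):
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(solve_linear(4, 3, 7))      # 4x + 3 = 7       ->  1.0
print(solve_quadratic(1, -3, 2))  # x^2 - 3x + 2 = 0 ->  (2.0, 1.0)
```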
We could do with as much information as possible about near-Earth asteroids. A manned mission is a natural step, both for investigating a class of object that could one day hit our planet, and also for continuing to develop technologies in directions that will be useful for our future infrastructure in space. You would think we would know much of what we needed from examining meteorites, which generally are chunks of asteroid material, but that assumption turns out to be erroneous. A recent paper in Nature has the story. Richard Binzel (MIT) and colleagues have been considering the properties of asteroids for a long time, looking at the spectral signatures of near-Earth asteroids and comparing them to spectra obtained from meteorites. And it turns out that most of the meteorites that fall to Earth represent types of asteroid that are different from the great bulk of near-Earth asteroids. In fact, the varied types of meteorites we find here generally resemble the mix of asteroids found in the main belt. How can this be? Binzel’s team posits our old friend the Yarkovsky effect, mentioned surprisingly often in our archives. Uneven heating of an asteroid surface and the subsequent radiation of that heat during rotation can create imbalances that accumulate over time, adjusting an object’s orbit. And if this study is to be believed, the Yarkovsky effect works more strongly on smaller objects and much more weakly on larger ones. So efficient is the effect at small scale that it can readily move boulder-sized objects out of the asteroid belt onto a path that leads to Earth, while larger asteroids are moved much more slightly. The largest near-Earth asteroids come from the innermost edge of the main belt, according to this theory, possibly remnants of a larger asteroid that was shattered aeons ago by collisions. In the aggregate, two-thirds of all such asteroids correspond to the type of meteorites known as LL chondrites, which represent only about eight percent of meteorites. They’re rich in olivine and poor in iron. Knowing this means we can put most of our attention onto deflecting this type of object. “Odds are,” says Binzel, “an object we might have to deal with would be like an LL chondrite, and thanks to our samples in the laboratory, we can measure its properties in detail. It’s the first step toward ‘know thy enemy.'” So most meteorites come not from the population of near-Earth asteroids but the main belt, on a track that is made possible by the Yarkovsky effect. How we defend against an incoming asteroid may well depend upon its type, so this is a result that should be weighed carefully. Let’s hope it can also be backed up by a mission to a near-Earth asteroid in the not too distant future. The paper is Vernazza, Binzel et al., “Compositional differences between meteorites and near-Earth asteroids,” Nature 454 (14 August 2008), pp. 858-860 (abstract)
Which of the following describes the given shape? a) acute scalene triangle, b) acute isosceles triangle, c) right isosceles triangle, d) obtuse isosceles triangle, or e) a right scalene triangle. Well it’s an enclosed shape with three straight sides, so it is a triangle; it’s a polygon with three sides. And these little markers here and here tell us that those two sides are the same length. And triangles with two sides the same length are called isosceles triangles. Remember, scalene means that all the sides of the triangle are different lengths, so any answer that says it’s a scalene triangle doesn’t match the diagram that we’ve got here. So that leaves us with three choices. Either it’s an acute isosceles triangle, a right isosceles triangle, or an obtuse isosceles triangle. Now you should remember that the base angles in an isosceles triangle are equal, and that’s these two angles here. They’re the angles where each of the sides of the same length meets the third side. Now remember an acute angle is one which is between zero degrees and ninety degrees, and an obtuse angle is between ninety degrees and one hundred and eighty degrees, and a right angle is exactly ninety degrees. Remember the sum of the measures of the interior angles in a triangle is a hundred and eighty degrees. Now that means that this angle and this angle have each got to be acute. They’ve got to be less than ninety degrees. If they were ninety degrees or more, then we’d have a sum of angles which is greater than a hundred and eighty degrees for a triangle. So that wouldn’t work. So when we talk about acute, right, or obtuse, we can’t be talking about these angles here in an isosceles triangle. We must be talking about the other angle. So the question is: is it acute, is it right, or is it obtuse? Well that symbol there means it’s ninety degrees, which makes it a right angle. And this means we’ve got a right isosceles triangle. So our answer is c, right isosceles triangle.
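The classification steps in this walkthrough (two equal sides means isosceles; the largest angle decides acute, right, or obtuse) can be captured in a small sketch. The Python below is illustrative only; the function name classify and the example side lengths are my own choices, chosen to match the triangle in the question.

```python
import math

def classify(sides, angles):
    """Classify a triangle by its side lengths and interior angles (in degrees)."""
    assert math.isclose(sum(angles), 180.0), "interior angles must sum to 180 degrees"

    # Side classification: scalene (all different), isosceles (exactly two equal),
    # equilateral (all three equal).
    distinct = len({round(s, 9) for s in sides})
    if distinct == 3:
        side_type = "scalene"
    elif distinct == 2:
        side_type = "isosceles"
    else:
        side_type = "equilateral"

    # Angle classification: the largest angle decides acute / right / obtuse.
    largest = max(angles)
    if math.isclose(largest, 90.0):
        angle_type = "right"
    elif largest < 90.0:
        angle_type = "acute"
    else:
        angle_type = "obtuse"

    return f"{angle_type} {side_type} triangle"

# The triangle in the question: two equal legs with a ninety-degree angle between them.
print(classify(sides=(1.0, 1.0, math.sqrt(2)), angles=(45.0, 45.0, 90.0)))
# -> "right isosceles triangle"
```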