Hungarian orthography (Hungarian: helyesírás, lit. ‘correct writing’) consists of rules defining the standard written form of the Hungarian language. It includes the spelling of lexical words, proper nouns and foreign words (loanwords) in themselves, with suffixes, and in compounds, as well as the hyphenation of words, punctuation, abbreviations, collation (alphabetical ordering), and other information (such as how to write dates).

Alphabet

Hungarian is written with the Hungarian alphabet, an extended version of the Latin alphabet. Its letters usually indicate sounds, except when morphemes are to be marked (see below). The extensions include consonants written with digraphs or a trigraph and vowel letters marked with diacritics. Long consonants are marked by a double letter (e.g. l > ll and sz > ssz) while long vowels get an acute accent (e.g. o > ó) or their umlaut is replaced with a double acute accent (ö, ü > ő, ű). Only the first letter of digraphs and of the trigraph dzs is written in upper case when capitalizing in normal text, but all letters are capitalized in acronyms and all-uppercase inscriptions. The letters q, x, y, w are only part of the extended Hungarian alphabet, and they are rarely used in Hungarian words – they are normally replaced with their usual phonetic equivalents kv, ksz, i, v (only x is relatively common, e.g. taxi). Ch is not part of the alphabet, but it still exists in some words (like technika, ‘technology’ or ‘technique’). In traditional surnames, other digraphs may occur as well, both for vowels and consonants.

Four principles of spelling

The first principle is that the Hungarian writing system is phonemic by default, i.e. letters correspond to phonemes (roughly, sounds) and vice versa. In some cases, however, vowel length or consonant length does not match between writing and pronunciation (e.g. szúnyog [suɲog] ‘mosquito’, küzd [kyːzd] ‘fight’, állat [aːlɒt] ‘animal’, egy [eɟː] ‘one’).

Suffixed or compound words usually obey the second main principle, word analysis. It means that the original constituents (morphemes) of a word should be written the same way, regardless of pronunciation assimilations. This, however, is only true when the resulting pronunciation conforms to some regular pattern; irregular assimilations are reflected in writing too. For example, hagy + j (‘you should leave [some]’) is pronounced like “haggy” but written as hagyj according to the principle of word analysis. This is because the combination of gy and j gives a long gy in Hungarian phonology anyway, so spelling out the original morphemes is considered clearer. By contrast, hisz + j (‘you should believe’) is pronounced “higgy” and also written as higgy, since this pronunciation cannot be regularly deduced from the morphemes and basic phonological rules. Compound words are generally written so that all constituents retain their spelling, but some compounds have become vague enough not to be considered true compounds any more, especially if one of the elements is obsolete. An example is kesztyű ‘glove’, which originally comes from kéz ‘hand’ and an obsolete tyű; in this case the spelling no longer reflects the derivation.
The third principle, tradition, affects for example surnames, whose spelling often predates the modern spelling rules of Hungarian. For example, kovács ’smith’ may be spelt Kovács, Kováts or Kovách as a surname. Another example for tradition is that the digraph ly is still used despite the fact that it stands for the same sound as j in today's standard Hungarian. The fourth principle (simplification) only affects a handful of cases. If a common noun ending in a double consonant has a suffix beginning with the same consonant, the third instance is dropped, e.g. toll + lal > tollal. This rule extends to Hungarian given names, e.g. Bernadett + től > Bernadettől ‘from Bernadett.’ On the other hand, compounds and suffixed proper names (excluding Hungarian given names) containing three consecutive identical consonants preserve all three, but a hyphen is also inserted (e.g. sakk-kör ‘chess group’, Wittmann-né ‘Mrs. Wittmann’, Bonn-nal ‘with Bonn’). The simplification principle is also applied to double digraphs at the border of suffixes, thus sz + sz becomes ssz (e.g. Kovács + csal > Kováccsal ‘with Kovács’). However, there is no simplification in compounds: e.g. kulcscsomó ‘bunch of keys’. In case of suffix-like derivational elements such as -szerű and -féle ‘-like’, simplification can only be applied to words ending in a single digraph, e.g. viasz + szerű > viasszerű ‘wax-like’ but not to their doubled forms: dzsessz + szerű > dzsessz-szerű ‘jazz-like’. Word breaks: writing in one word or separately Compound words are typically spelt as one word (without spaces) and phrases are normally spelt as more than one word (with one or more spaces), but this is not always the case. Hyphenated spelling is considered an alternative to writing as one word and is used, e.g., if a compound contains a proper name. As far as repeated words are concerned, they are normally written separately (with a comma), but a hyphen is used if their connection is more than occasional (e.g. ki ‘who’ but ki-ki ‘everyone’). If a word is repeated with a different suffix or postposition, the words are written separately (napról napra ‘day by day’, lit. ‘from day to day’), except if an element only exists in this phrase, in which case the words are written with a hyphen (régi ‘old’ réges-régi ‘ancient old’). Coordinated words are normally written separately (with a comma). If the meaning of the result is different from that of the two words together, but both elements take suffixes, they are written with a hyphen (e.g. süt-főz ‘cook’, consisting of words referring to cooking in the oven and cooking in water, sütnek-főznek ‘they cook’). A hyphen is needed in cases when a phrase is only used with certain suffixes. Connections of words which are completely fused and thus take suffixes only at the end of the second element are written as one word (e.g. búbánat ‘sorrow and grief’, búbánatos ‘stricken with sorrow and grief’). However, there are phrases that only take suffixes at the end but their elements are still connected with a hyphen, as when words are contrasted (e.g. édes-bús ‘bittersweet’). Certain phrases can be suffixed either at the end of both elements or only at the end of the second element (e.g. hírnév ‘fame’: hírneve or híre-neve ‘his/her/its fame’). As shown by printed material and street inscriptions, this field is probably the most problematic for the majority of native speakers even at a reasonably educated level. 
The main principle is that these compounds have to be written without spaces if any of these three criteria are met: - there is a change of meaning, which cannot be deduced from the elements alone, - an inflectional suffix is omitted, - tradition (the examples in this group are limited, though). This applies to phrases and compounds of many types, like those where the first element is the subject of the second (which is a participle), or it is the adjective of the second (e.g. gyors vonat means ’fast train’, while gyorsvonat means ’express train’ as a type of train: the change in meaning makes it necessary to write the latter as one word). - Problematic point(s): it is virtually unpredictable whether the change in meaning (as compared to the mere sum of its elements) is attributed to one element or the whole compound. For example, élő adás ’live programme’ is written as two words, even though the word élő is used differently from its basic meaning – probably because it was decided that this component can carry the change of meaning, so writing the compound as one word is not necessary. On the other hand, gyorsétterem ’fast food restaurant’ (lit. ’fast restaurant’) is written as one word – probably because the change of meaning was attributed to the whole compound, leaving the lexical meaning of the word gyors intact. Some phrases without any change of meaning are written as one word, e.g. útitárs ’travel companion’, while most other phrases are written regularly: úti cél ’travel destination.’ - Sometimes the original meaning of the adjective is retained, but the whole compound still means something more specific than the sum of the elements. For example, savanyú káposzta, lit. ’sour cabbage’, actually ’sauerkraut’, is more than a cabbage that tastes sour: it means a type of pickled food, yet it is written with a space. On the other hand, mobiltelefon ’mobile phone’ is written in one word, although it is actually a telephone that is mobile – writing in one word may be justified by the different technology, as distinguished from a cordless telephone, which is also portable. As far as the suffix omission is concerned, often there is a grammatical relationship between two nouns of a compound which could also be expressed in a marked, more explicit way: for example ablaküveg ’window pane’ could be expressed as az ablak üvege ’the pane of the window,’ and based on this derivation, it needs to be written as one word. The word bolondokháza ’confusion, turmoil’ also needs to be written as one word, despite the marked possessive, so as to avoid the literal meaning ’house of fools’ (1st case). Other compounds, where the first element gives the object, the adverb, or the possessor, are also written in one word where the suffix is omitted, or if the actual meaning is different from the sum of its elements. Thus, szélvédett ’wind-protected’ can be deduced from széltől védett ’protected from [the] wind’, and it is written together as the suffix től is omitted. Verbal phrases where the suffix is marked are usually written in two words, even if the meaning has become figurative, (e.g. részt vesz ’take part’), while other phrases with a marked suffix are written in one word (e.g. véghezvisz ’implement’, literally “take to the end”). - Problematic point(s): there are more than a hundred verbal phrases that are used exactly like verbs with a verbal prefix (cf. “eat up” in English), like részt vesz above, but they must be written as two words. Verbal prefixes (cf. 
Vorsilben in German) are only written together with the verb they belong to if they immediately precede that verb. If the same verbal prefix is repeated to express repeated action, the first is divided by a hyphen, the second is written in one word (meg-megáll ’keep stopping once in a while’). If two verbal prefixes with an opposite meaning follow each other, both are written separately (le-föl sétál ’walk up and down’). Verbal prefixes may be written separately if the meaning of the prefix is stressed and the prefix is meant in a literal sense, but they must be written as one word if the meaning is changed (e.g. fenn marad ’stay upstairs’ but fennmarad ’survive, remain’). Some verbal prefixes coincide with adverbs that can have personal endings. In this case, they can only be written as one word if they are in the third person singular and the prefix/adverb is not stressed on its own (especially if the meaning is changed). Otherwise (if another person is used and/or the prefix/adverb is stressed) they should be written in two words.

- Problematic point(s): phonologically speaking, verbal prefixes are always attached to the following word, even if that word is an auxiliary verb wedged in between, which loses its own stress. For example: megfog means ’catch’ and megnéz means ’see, have a look.’ Thus, megfogom a lepkét ’I’ll catch the butterfly’ but meg fogom nézni ’I’m going to see it.’ In the first example, meg belongs to fogom; in the second, meg belongs to nézni. The pronunciation is [ˈmegfogom] in both cases. These cases can be distinguished, though, by considering the word elements. On the other hand, verbal prefixes with personal suffixes can never be written together with the main verb, even though they are stressed the same way as unsuffixed prefixes, e.g. rám néz, rád néz but ránéz ’s/he looks at me, you, him/her.’

A separate group of compounds with subordinated elements is the one named literally “meaning-condensing” or “meaning-compressing” compounds, which have a more complex internal structure, containing implicit elements outside the constituting words, or sometimes where the present meaning cannot be derived at all from the elements. They are always written in one word, e.g. csigalépcső ’spiral staircase,’ lit. “snail-staircase”, i.e. a staircase similar to the shell of snails.

Phrases whose first element is a participle are written separately if the participle expresses an occasional action: dolgozó nő ’a working woman, a woman at work.’ However, if the participle expresses function, purpose, ability, task, or duty, the phrase is considered a compound and is written as one word, e.g. mosónő ’washerwoman’, someone whose duty is to wash. Sétálóutca ’walking [pedestrian] street’ means a street for walking: writing it as one word expresses that it is not the street that walks. – However, this rule doesn’t apply to compounds where an element is already a compound itself, even if the whole compound expresses function or purpose. For example, rakétaindító állvány ’rocket launching platform’ is written as two words because of its compound first element, despite the fact that it is not the platform which launches the rocket, but it is only used for it, so a function is expressed.

- Problematic point(s): there are several received expressions referring to function that are written separately despite the above rules (e.g. kijelentő mód ’indicative mood’ lit.
’declaring mood’, even though the mood is used for declarations, it doesn’t declare anything), so it is sometimes not obvious how a newly coined construction should be written. In addition, present participles sometimes become nouns, and their compounds cannot be written in two words, as they cannot be considered adjectives anymore. For example, labdarúgó ’footballer’ was created from a participle (lit. ’ball-kicking’ [person]) but it is a noun today, and since labdarúgómez ’footballer’s strip’ implies a possessive relationship, it must be written in one word.

If a phrase (e.g. an adjective and a noun or a noun and a postposition) written in two words receives a derivational suffix, it will also be written in two words – except if the meaning is changed. However, if they receive a second derivational suffix, the phrase will be written in one word. (For example: egymás után ’one after the other’, egymás utáni ’successive’, but egymásutániság ’successiveness,’ i.e. ’succession.’ In addition: föld alatt ’under the ground’, föld alatti ’being under the ground’ but földalatti ’underground <movement>’ or ’subway, tube.’)

- Problematic point(s): there are more than fifty phrases written in one word after only one single derivational suffix (e.g. partra száll ’disembark’ but partraszállás ’disembarkation’).

Appositional compounds are normally written in two words, e.g. ’a footballer wife’ (a wife who plays football) is expressed as futballista feleség. However, if there is a possessive relationship between the words, i.e. if the wife of a footballer is meant, it is considered a (regular) compound, thus it should be written as one word: futballistafeleség. There are several appositional compounds, though, which are written as one word, especially where the first element specifies the type of the second (e.g. diáklány ’student girl’).

- Problematic point(s): people find long words difficult to read, so many still prefer to write them separately, relying on the context to clarify the meaning. In addition, the justification for the above subtype that provides specification is considered vague.

Words containing a suffixed numeral are written as one word (e.g. húszméteres út ’a twenty-metre-long way,’ cf. húsz méter ’twenty metres’), except if an element is already a compound (e.g. huszonegy méteres út ’a twenty-one-metre-long way’ or húsz kilométeres út ’a twenty-kilometre-long way’). This rule doesn’t apply to compounds with numbers written in digits, e.g. 20 méteres út, as they are written with spaces. – A similar principle is applied to compounds whose first element expresses the material of the second (e.g. faasztal ’wooden table’ but fenyőfa asztal ’pine-wood table’ and fa konyhaasztal ’wooden kitchen table’).

- Problematic point(s): linguistically speaking, these do not constitute an actual compound (because the meaning is not institutionalized), so there is no sound reason for writing them in one word. In addition, the normal alternative to writing in one word is writing with hyphens, rather than writing separately, so this opposition is unusual in Hungarian orthography.

Hyphenating long compounds

The syllable-counting rule

To avoid overly long words, a “syllable-counting rule” is applied. Compounds with more than 6 syllables (excluding inflectional suffixes) and more than 2 elements take a hyphen at the border of the two main elements.
For example, labdarúgócsapataitokkal ’with your [PL] football teams’ has 10 syllables, but its stem, labdarúgócsapat is only 6 syllables long, so all its forms are written as one word. On the other hand, labdarúgó-bajnokság ’football championship’ has 7 syllables even in its base form, so all its forms should take a hyphen. Compounds of whatever length are permitted, supposing they consist of only two elements, e.g. nitrogénasszimiláció ’nitrogen assimilation’ is written as one word despite its 9 syllables. Sometimes adding a single letter (a short suffix, in fact) may induce a hyphen, e.g. vendéglátóipar ’catering industry’ is written as one word, but vendéglátó-ipari ’catering industry related’ will take a hyphen in accordance with the above rules. - Problematic point(s): not only keeping the numbers and their meanings in mind and the differentiation between inflectional and derivational suffixes, but also that compounds are sometimes far from transparent to today’s speakers (e.g. rendszer ‘system’ from rend ‘order’ and an obsolete szer). In addition, it is not commonly known what is considered an element: it includes e.g. foreign prefixes that are used on their own with Hungarian second elements (there is a list of them) as well as verbal prefixes consisting of no less than two syllables. Three “mobility rules” Sometimes word boundaries are flexibly rearranged to reflect the meaning of the whole compound: the three rules dealing with it are referred to as “mobility rules”. - If a compound with a hyphen takes another element, its original hyphen is removed, and only the new element takes a hyphen: békeszerződés-tervezet ’peace treaty draft’ but békeszerződéstervezet-kidolgozás ’peace treaty draft development’. - If a phrase of two words takes another element that belongs to both, the two original elements will be exceptionally written together, and the new element will be attached to them with a hyphen: hideg víz ’cold water’ but hidegvíz-csap ’cold water tap’. - If two compounds with an identical element are contracted, the identical element is written separately and the two other elements are connected with a hyphen: e.g. rézötvözet ’copper alloy’ and aranyötvözet ’gold alloy’ but réz-arany ötvözet ’copper-gold alloy’. - Problematic point(s): the resulting very long words are difficult to comprehend, so instead of rephrasing them, people tend to write them separately. In addition, it is debated whether these forms occasionally written in one word should be allowed, because this form only shows the highest relationship at the expense of an easy-to-read overview of the other parts. (One of the OH. authors, Attila Mártonfi noted: the inscription forgalmi rend változás ‘change in traffic regulations’ is easier to read if written in three words rather than the regular form created with the mobility rules, forgalmirend-változás.) Sometimes this rule is ignored even in the title of linguistic books, such as Magyar nyelvtörténet (‘Hungarian historical linguistics,’ lit. “Hungarian language history”), which should be written Magyarnyelv-történet, to reflect that it is not historical linguistics in Hungarian language, but the historical linguistics of the Hungarian language. It may also cause problems when the involved elements are proper names, such as Nap–Föld-távolság ‘Sun–Earth distance,’ because the dashes and hyphens follow a different rule in this case (see the part on punctuation). 
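The arithmetic behind the syllable-counting rule can be illustrated with a short sketch. The following Python fragment is purely illustrative and not part of any official tool: it counts vowel letters as syllables and takes the list of compound elements as given, since segmenting a compound into elements (and deciding where the main border lies) is exactly the hard part discussed above.

```python
# A minimal sketch of the syllable-counting rule, assuming the caller already
# knows the compound's elements and its uninflected stem. The names used here
# (syllable_count, needs_hyphen) are illustrative, not from any library.

HUNGARIAN_VOWELS = "aáeéiíoóöőuúüű"

def syllable_count(word: str) -> int:
    """Count syllables as the number of vowel letters (one vowel per syllable)."""
    return sum(1 for ch in word.lower() if ch in HUNGARIAN_VOWELS)

def needs_hyphen(stem_elements: list[str]) -> bool:
    """Apply the rule: more than 2 elements AND more than 6 syllables in the stem."""
    stem = "".join(stem_elements)
    return len(stem_elements) > 2 and syllable_count(stem) > 6

# labda + rúgó + csapat: 3 elements but only 6 syllables -> one word
print(needs_hyphen(["labda", "rúgó", "csapat"]))     # False -> labdarúgócsapat
# labda + rúgó + bajnokság: 3 elements, 7 syllables  -> hyphen at the main border
print(needs_hyphen(["labda", "rúgó", "bajnokság"]))  # True  -> labdarúgó-bajnokság
# nitrogén + asszimiláció: 9 syllables but only 2 elements -> one word
print(needs_hyphen(["nitrogén", "asszimiláció"]))    # False -> nitrogénasszimiláció
```

The sketch only answers whether a hyphen is required; placing it at the border of the two main elements, and telling inflectional from derivational suffixes, still requires the linguistic judgment described above.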
Capitalization

The following types of proper names are distinguished: personal names, animals’ names, geographical names, astronomical names, names of institutions, brand names, names of awards and prizes, and titles (of works). Proper names may become common names, and in this case they are written in lowercase (e.g. röntgen ‘x-ray’) and even their derived compounds may become lowercase, losing the hyphen (e.g. ádámcsutka rather than *Ádám-csutka ‘Adam’s apple’).

Personal names and animals’ names

Surnames and given names are capitalized. Surnames may have an old-fashioned spelling, which is usually retained – except if their form already has variations, and some of them may interfere with reading. They may consist of two or more elements, and they may be given as one word or in several words, but today hyphenation is the most common method. Given names are written phonetically (even modern names like Dzsenifer, cf. English Jennifer), except that x and ch are retained (even though they are pronounced ksz and h), e.g. Richárd, Alexandra.

Names of gods and religious figures are capitalized, except when they are referred to as common names (like Greek gods) or if they are mentioned as part of common phrases (e.g. hála istennek ‘thank God’). Occasional epithets are not capitalized: only their fixed equivalents are. Common nouns expressing rank or relation are written separately (István király ‘King Stephen’, Németh mérnök ‘Mr Németh, engineer’). Groups of people named after people (or even a fancy name) are written separately, except for groups founded or led by that person (in which case it is a compound, written with a hyphen).

- Problematic point(s): sometimes it is not commonly known which is the case; for example, Kodály vonósnégyes ‘Kodály string quartet’ is written with a space as it was only named after Kodály, while Tátrai-vonósnégyes is written with a hyphen as it was founded by Vilmos Tátrai. Another problematic point is that this rule applies to families (e.g. Kovács család ‘Kovács family’) but it does not apply to dynasties (Bourbon-család ‘Bourbon family’).

Suffixes are added to personal names without hyphens. If a suffix is attached, it follows the pronunciation of the word, including obsolete consonant clusters (e.g. Móricz, pronounced [ˈmoːrits], suffixed: Móriczcal). However, if a surname or a foreign name ends in a double consonant, suffixes are added with a hyphen, so that the original form can be restored (e.g. Papp is suffixed as Papp-pal, because Pappal would refer to another name, Pap). However, given names are suffixed in a simplified way, because they are from a limited set, so their original forms can be retraced (e.g. Bernadett + tel > Bernadettel).

If an adjective is formed from a proper name, it is not capitalized. (In the case of a hyphenated compound, no element is capitalized, e.g. Rippl-Rónai but rippl-rónais ‘typical of Rippl-Rónai’.) Suffixes are added directly, except if the name consists of several elements written separately: Széchenyi István and Széchenyi István-i. Compounds formed with personal names are always hyphenated, e.g. Ady-vers ‘a poem by Ady’.

- Problematic point(s): e.g. Kossuth-díj ‘Kossuth Prize’ is also hyphenated, even if it is not a prize given by Lajos Kossuth, and no reason can be found for an actual compound: he had nothing to do with the prize, it was only named in his honour. This rule is also often ignored when it is considered to be overruled by another rule concerning names of institutions, e.g.
in the name of Mindszenty-emlékhely ‘Mindszenty Memorial’, advertised as Mindszenty Emlékhely. In this case there are actually two reasons to capitalize Mindszenty (as a name of a person and the beginning letter of the institution name) but the second element of the compound should not be affected. An exception to the hyphenation of compounds with a proper name is when the proper name contains an uncapitalized common noun. For example, if there is a monastery (kolostor) named after Jeremiás próféta ‘the Prophet Jeremiah’, the compound Jeremiás próféta kolostor cannot have the usual hyphen, as it would falsely suggest a closer relationship between próféta and kolostor. (If all the elements were common nouns, the case would be simpler, as the above mobility rules could be applied.) Animals’ names are capitalized, and if the species is added, it is written in lowercase, without a hyphen. The two most important questions about geographical names are whether a name should be written in one word, with a hyphen, or in separate words, and which elements should be written uppercase and lowercase. Different written forms may refer to different entities, e.g. Sáros-patak lit. ‘muddy river’ refers to a river, but Sárospatak refers to a city (because rivers' names are written with a hyphen, but city names are written as one word). This field is considered one of the most complex parts of Hungarian orthography, so a separate volume has been published about it, and a separate board (Földrajzinév-bizottság) working in the Ministry of Agriculture is entitled to give statements. It consists of experts in linguistics, education, transportation, hydrology, natural protection, public administration, ethnic minorities, foreign relations, and other fields. Apart from single-element names, country names with -ország, -föld, -alföld or -part (‘country’, ‘land’, ‘plain’, ‘coast’) and most regions are written in one word, as well as Hungarian settlements and their districts (“towns”) and quarters, and even Hungarian names outside Hungary. The adjective-forming suffix -i (sometimes -beli) is attached directly to the name. If it already ends in -i, this ending is not repeated. - Problematic point(s): certain region names have become one word, dropping the hyphen, such as Dunakanyar; there are about 60 such forms. Quarters also need to be written as one word, even if they contain a proper name (e.g. Wekerletelep, lit. “Wekerle's settlement”), and even if they exceed the 6 syllables (e.g. Szépkenyerűszentmárton, 7 syllables and 4 elements, despite the above-mentioned syllable-counting rule). If a geographical name contains a common geographical expression (river, lake, mountain, island etc.) or another common noun or an adjective, the compound is written with a hyphen (e.g. Huron-tó ‘Lake Huron’ or Új-Zéland ‘New Zealand’). When these forms are converted into an adjective, only those elements are left capitalized which are actual proper names themselves (Kaszpi-tenger and Kaszpi-tengeri ‘Caspian Sea’, however Új-Zéland and új-zélandi – zélandi is not considered a proper name because it carries the adjectival suffix). The same rule is applied to compounds with three or more elements, although compounds with more than four elements are simplified (lower-ranked hyphens are removed). An en dash is used to express a relation between two places, and its adjectival form becomes completely lower-case (e.g. Moszkva–Párizs ‘Moscow-Paris [route]’ and moszkva–párizsi ‘of the Moscow-Paris route’). 
However, if a higher-ranked connected element becomes an adjective, the geographical proper names will retain the upper case (e.g. Volga–Don-csatorna ‘Volga-Don canal’ vs. Volga–Don-csatornai), except when the elements of the name contain adjectives or common nouns, which will become lower-case (e.g. Cseh–Morva-dombság ‘Bohemian-Moravian Highlands’ vs. cseh–morva-dombsági). All elements are written separately (excluding the above-mentioned names that are written as one word or with a hyphen) in current and historical country names and geographical-historical region names. Their adjectival forms are all written with lower case. (For example, Egyesült Királyság ‘United Kingdom’ vs. egyesült királysági ‘from/of the U.K.’, Dél-afrikai Köztársaság ‘South African Republic’ vs. dél-afrikai köztársasági but San Marino Köztársaság ‘Republic of San Marino’ vs. San Marino köztársasági). Only the first element is capitalized in subnational entities like counties, areas, districts, neighbourhoods. When forming an adjective, this uppercase letter is only kept if this element is a proper name, e.g. New York állam ‘State of New York’ vs. New York állami. However, if the first element of such an entity is a common noun or an adjectival form, all elements are written lower case (e.g. in names of local administrative units like Váci kistérség vs. váci kistérségi). Names of public spaces (roads, streets, squares, bridges etc.) are written separately (except for elements that are already compounds or hyphenated). Their first element is capitalized, and this capitalization is kept even in the adjectival forms, e.g. Váci utca ‘Váci Street’ and Váci utcai. - Problematic point(s): people need to know if a phrase is officially the name of that place or just a designation, e.g. Erzsébet híd is a name (‘Elisabeth Bridge’) but Duna-híd merely refers to a bridge on the Danube, so a hyphen should be used. If a common name is added to a geographical name to clarify its nature, it is written separately. - Problematic point(s): it is often unclear whether a common noun is actually part of an official geographical name, e.g. many people believe Fertő tó is the actual name of Fertő Lake so they write it with a hyphen; but the name is only Fertő, thus a space must be used before tó. In addition, names like Szahara sivatag (‘Sahara Desert’) or Urál hegység ‘Ural Mountains’ do not contain the common name, so no hyphen must be used, as opposed to the Kaszpi-tenger type. If a geographical name consists of several elements whose relationship is marked by suffixes or postpositions, these elements are also written separately. The uppercase letter of the beginning element is kept even in an adjectival form. - Problematic point(s): the suffix that marks the possessive relationship is lost in the adjectival form, so the relationship is eventually unmarked, but the hyphen is still not used. For example, when Vác környéke ‘Vác environs’ becomes Vác környéki ‘of/from Vác environs’, the possessive-marker -e is lost, so it seemingly becomes analogous with the above Kaszpi-tengeri type. In addition, while names like Külső Pesti út (‘Outer Pest Road’) makes it obvious that külső is part of the name (rather than an occasional designation), the adjectival form Külső Pesti úti can only be given correctly with this knowledge. 
The above case of Jeremiás próféta kolostor emerges again with the type of Mária asszony sziget ‘Lady Mary Island’, where sziget ‘island’ would normally be connected with a hyphen, were it not for the common noun asszony ‘lady’ in the original name, which makes it impossible, so all elements have to be written separately.

Stars and other astronomical objects

Stars, constellations, planets, and moons are written with an initial capital, e.g. Föld ‘Earth’, Tejút ‘Milky Way’, especially as astronomical terms. In everyday usage, however, names of the Earth, the Moon, and the Sun are normally written in lowercase (föld körüli utazás ‘a journey around the Earth’).

Names of institutions

Names of offices, social organizations, educational institutions, academic institutes, cooperatives, companies etc. are written capitalizing all elements except conjunctions and articles. In adjectival forms, only actual proper names and fancy names are left uppercase. For example, Országos Széchényi Könyvtár ‘National Széchényi Library’ vs. országos Széchényi könyvtári.

- Problematic point(s): it is not always known whether a specific form is actually the official name of an institution, or what its official name is (e.g. whether the city where it is located is part of the name). In addition, it is not always clear if a group is actually an institution in the sense of having been registered at the court, with a statute, a stamp, letter header etc. of its own. A third problem is the question of whether the spelling of an organization can be corrected if it is not written according to the above rule. In addition, adjectival forms derived from institution names are also often mistaken because people feel the need to distinguish them from genuine common nouns (especially if the name contains a fancy name that becomes identical with a common noun if written in lowercase). Furthermore, it is unclear why cinemas are treated differently (see below) from theatres, cf. Művész mozi, but Magyar Színház.

If a part of the institution name stands for the whole name, its upper case form is preserved if it is a specific keyword of the name. However, if a common noun part is used for the whole name, it is written in lower case (except for Akadémia for the Hungarian Academy of Sciences and Opera for the Hungarian State Opera House).

- This rule is commonly violated in legal documents where the authors want to make it as clear as possible that the names refer to the contracting parties in particular, so they write it in upper case (not only the common noun parts of the company names but also generic words referring to the parties involved).

Subordinated units of institutions are written in uppercase if they are major divisions (e.g. Földrajzi Társaság ‘Geographical Society’, under the Hungarian Academy of Sciences), not including the personnel department or the warden’s office.

Railway stations, airports, cinemas, restaurants, cafés, shops, baths and spas, cemeteries etc. are considered less typical institutions, so only their actual proper name elements (including possible fancy names) are written in upper case, apart from the first word. Their adjectival forms retain the original case. For example, Keleti pályaudvar ‘Eastern Railway Station’ vs. Keleti pályaudvari; Vén Diák eszpresszó ‘Old Student Café’ vs. Vén Diák eszpresszóbeli.

Names of products, articles, makes, and brands are written capitalized, e.g. Alfa Romeo.
This does not include names which include the material or origin of the product, e.g. narancsital ‘orange juice’. If the word showing the type is added to the name for clarification, it is done with a space, and in lowercase, e.g. Panangin tabletta ‘Panangin pill’.

Awards and prizes

Words denoting a prize, an award, a medal etc. are attached with a hyphen to proper names, e.g. Kossuth-díj ‘Kossuth Prize.’ If the name consists of several elements, whose relation is marked, all the elements are capitalized, e.g. Akadémiai Aranyérem ‘Golden Medal of the Academy.’ Degrees and types of awards are written in lowercase.

Titles of works

Titles are classified as constant and individual titles: the first being the titles of newspapers, periodicals, and magazines, and the second used with literary, artistic, musical, and other works, articles etc. All elements of constant titles are written in uppercase (e.g. Élet és Tudomány ‘Life and Science’ [weekly]), while only the first word is capitalized in individual titles (e.g. Magyar értelmező kéziszótár ‘Defining Desk Dictionary of the Hungarian Language’ or Kis éji zene ‘A Little Night Music’).

Suffixes are attached to titles without a hyphen, except if a title already ends in a suffix or a punctuation mark, or if the suffix creates an adjective: in these cases, a hyphen must be used. (For example: a Magyar Hírlapban ‘in Magyar Hírlap’ but Magyar Hírlap-szerű ‘Magyar Hírlap-like.’)

Names of national and religious holidays, celebrations, notable days, periods, and historical events are not capitalized (nor are day or month names); neither are names of nationalities and ethnicities, languages and language groups, or religions. Events, programmes, and arrangements are not capitalized either, except if they have an institutional background.

- Problematic point(s): an average person cannot always know if an event has an institutional background. Therefore, events are still usually written with capital letters.

Apart from personal names, common nouns expressing rank or relation may also be capitalized in addresses for reasons of politeness. Suffixes and titles like Doctor, Junior, Senior, and their abbreviations are only capitalized if they are in a prominent position (e.g. in postal addresses or lists).

Foreign words and loanwords

Foreign words either retain their foreign spelling or they are phonetically respelled according to the Hungarian writing system. If a word comes from a language using the Latin script, it is only respelled if it has become an integral, widely known part of the Hungarian language (e.g. laser > lézer; manager > menedzser). If it is less widely used, it retains its original spelling, e.g. bestseller, myocarditis, rinascimento. But there is no hard and consistent rule, and many widely used terms are written in the original spelling, e.g. musical or show. Certain phrases from foreign languages are always written in their original form, even if the individual words would be respelled in isolation, e.g. tuberkulózis cf. tuberculosis bronchialis.

- Problematic point(s): Inconsistency in some cases, cf. fitnesz for ‘fitness’ and wellness, or Milánó and Torino. Certain words, long present in Hungarian, are written in the foreign way (such as musical), despite being commonly known, because of uncommon sound clusters in Hungarian (such as [mju] in *mjuzikel), or because of possible confusion with an existing Hungarian word (e.g. show for só ‘salt’).
Moreover, the traditional Hungarian transliteration may be rejected for languages like Chinese that already have a Latin version of their writing system. In addition, it may not be obvious whether a Latin or a non-Latin official language of a country should be considered as a basis (e.g. Indian names).

Some features of the original spelling are sometimes retained, e.g. football > futball (pronounced “fudbal”), million > millió (pronounced “milió”). The digraph ch is preserved if it is pronounced [h]. The letter x, if pronounced “ksz”, is usually written x in Hungarian too. However, if it is pronounced “gz”, it is normally written gz, again with a few exceptions. The letters qu are always respelled as kv.

If the source language uses a non-Latin script (Greek, Russian, Chinese etc.), words are respelled phonetically. This does not always mean exact transliteration: sometimes the foreign pronunciation is bent to conform better to Hungarian phonology (e.g. szamovár ‘samovar’, tájfun ‘typhoon’). In practice, English transliterations are also often used, such as gyros instead of gírosz.

Proper names from languages with a Latin alphabet are normally written in the original way, e.g. Shakespeare, Horatius, Chopin, including all the diacritics (e.g. Molière, Gdańsk). Certain foreign proper names have a Hungarian version, e.g. Kolumbusz Kristóf for Christopher Columbus (in the Eastern name order, typical of Hungarian). Other names adapted the given name and the word order to Hungarian customs, but left the surname intact, e.g. Verne Gyula for Jules Verne. Recently borrowed names are no longer modified in Hungarian. The only exceptions are some given names which can only be written in Hungarian spelling, e.g. Krisztián for Christian and Kármen for Carmen.

As with common nouns, ch and x are retained in both personal names and geographical names of foreign origin (e.g. Beatrix, Mexikó). Similarly to common names again, widely known and fixed forms of proper names from languages with a non-Latin script are preserved (e.g. Ezópus (Aesop), Athén, Peking), rather than introducing a more up-to-date or more accurate transliteration (e.g. Aiszóposz, Athénai/Athína, Pejcsing). Some well-established foreign names have a popular form used in phrases and another referring to the person (e.g. Pitagorasz tétele ‘Pythagorean theorem’ but Püthagorasz for the philosopher himself).

Suffixes are added directly in most cases. The -i suffix is omitted in writing if the word already ends in the letter i (e.g. Stockholm > stockholmi; Helsinki > helsinki). In the case of suffixes of variable forms depending on Hungarian vowel harmony rules, the version in accordance with the actual pronunciation should be used. If a certain suffix requires lengthening of the word-final vowels a, e, o, ö, they are lengthened as usual, e.g. Oslo but Oslóban, oslói. In addition, suffixes will follow the pronunciation of the word in terms of the ending consonant and the front or back vowels (e.g. Bachhal ‘with Bach’, Greenwichcsel ‘with Greenwich’). If the last letter of a foreign word is silent (not pronounced) or part of a complex cluster of letters, a hyphen is used when attaching suffixes (e.g. guillotine-nal ‘with a guillotine’, Montesquieu-vel ‘with M.’). If an adjective is formed from a proper name with only one element, it will be lowercase (e.g. voltaire-es ‘Voltaire-esque’). A hyphen is also used if an adjective is formed from a multiword name (e.g. Victor Hugó-i ‘typical of V. H.’, San Franciscó-i ‘S.
F.-based’). The last vowel is lengthened even in writing if it is pronounced and this is required by phonological rules. If the suffix begins with the same letter as a word-final double letter (e.g. Grimm-mel ‘with Grimm’), a hyphen is used again.

Hyphenation

Hyphenation at the end of a line depends on whether there is an easily recognizable word boundary there. If the word is not a compound (or it is, but the boundary is not nearby), the word is hyphenated by syllables, otherwise by word elements (e.g. vas-út ‘railway’, lit. ‘iron-road,’ instead of *va-sút).

The number of syllables is defined by the number of vowels (i.e., every syllable must contain one and only one vowel), and the main rule can be summarized as follows: a syllable can begin with at most one consonant (except for the first syllable of a word, which may contain up to three initial consonants). It means that a syllable can only begin without a consonant if there is no consonant after the preceding vowel (e.g. di-ó-nyi ‘nut-sized’), and if there are multiple consonants between vowels, only one can go to the next syllable (e.g. lajst-rom ‘list’).

Hyphenation normally follows pronunciation, rather than the written form. If a word contains several vowel letters but they are pronounced as a single sound, it cannot be hyphenated (e.g. Soós ‘a surname’, blues ‘blues’). Pronunciation is respected in the case of ch, which is pronounced as a single sound, so both its letters are kept together (e.g. pszi-chológia, züri-chi ‘from Zürich’). Hungarian surnames are also hyphenated by pronunciation, e.g. Beöthy > Beö-thy [pr. bő-ti], Baloghék ‘the Balogh family’ > Ba-lo-ghék [pr. balog], móri-czos ‘typical of Móricz’ [ˈmoːrits]. The same principle applies to foreign common names and proper names, e.g. Ljub-lja-na, Gior-gio, Fi-scher for consonants (because lj, gi, and sch denote single sounds) and Baude-laire, Coo-per for vowels. Even acronyms can be hyphenated if they contain at least two vowels (e.g. NA-TO) or at the boundary of the acronym and the suffix, where a hyphen already exists (e.g. NATO-ért ‘for NATO’). On the other hand, x denotes two sounds, but it is not separated at the boundary of two syllables (e.g. ta-xi rather than *tak-szi, based on phonetics).

Long double consonants are separated, and their original forms are restored if they are at the boundary of two syllables (e.g. meggyes ‘cherry-flavoured’ > megy-gyes). Although not incorrect, it is not recommended to leave a single vowel at the end or the beginning of a line (e.g. Á-ron, Le-a). Double vowels can be separated (e.g. váku-um ‘vacuum’), and long consonants can also be separated (e.g. ton-na ‘ton’). Inflectional suffixes are not considered elements on their own (e.g. although the stem of pénzért ‘for money’ is pénz, its hyphenation is pén-zért rather than *pénz-ért).

Apart from the hyphenation based on pronunciation, foreign compounds may be hyphenated at their boundary, if the prefix or suffix is widely recognized, e.g. fotog-ráfia (by syllables) or foto-gráfia (by elements). The elements are also taken into consideration in compound names (e.g. Pálffy [pr. pálfi], hyphenated as Pál-ffy, rather than *Pálf-fy). Sometimes different ways of hyphenation reflect different words (e.g. me-gint ‘again,’ a single word hyphenated by syllables, cf. meg-int ‘admonish,’ a compound with a verbal prefix, hyphenated by elements). Hyphens are not to be repeated at the beginning of the next line, except in specialized textbooks, as a way of warning about the correct form.
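As a rough illustration of the basic syllable rule only (one vowel per syllable, at most one consonant starting a syllable), here is a minimal, hypothetical Python sketch. It deliberately ignores digraphs and the trigraph dzs, long double consonants, compound boundaries, and the pronunciation-based cases such as Soós or Beöthy, all of which change the result in real hyphenation.

```python
# A simplified sketch of syllable-based line-break hyphenation, assuming every
# letter is an independent sound (no digraphs, no compounds). Illustrative only.

VOWELS = "aáeéiíoóöőuúüű"

def hyphenate(word: str) -> list[str]:
    """Cut so that each syllable has one vowel and starts with at most one consonant."""
    w = word.lower()
    vowel_positions = [i for i, ch in enumerate(w) if ch in VOWELS]
    cuts = []
    for pos in vowel_positions[1:]:
        # break directly before the vowel if another vowel precedes it,
        # otherwise before the single consonant letter preceding it
        cuts.append(pos if w[pos - 1] in VOWELS else pos - 1)
    pieces, start = [], 0
    for cut in cuts:
        pieces.append(word[start:cut])
        start = cut
    pieces.append(word[start:])
    return pieces

print(hyphenate("lajstrom"))  # ['lajst', 'rom'] -- cf. lajst-rom above
print(hyphenate("tonna"))     # ['ton', 'na']    -- cf. ton-na above
print(hyphenate("pénzért"))   # ['pén', 'zért']  -- cf. pén-zért above
```

A word like diónyi already breaks this naive model, because ny is a digraph standing for one sound; a realistic hyphenator would have to tokenize digraphs first and consult pronunciation, as the problematic points below make clear.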
- Problematic point(s): certain vowel pairs are sometimes pronounced as diphthongs, e.g. augusztus ‘August’ is often pronounced in three syllables like au-gusz-tus, but the syllable-counting rule (above) should consider it a word of four syllables. Another problematic point is that some words might seem to be compounds although they are not (e.g. jobbágy ‘serf’ > job-bágy, although jobb ‘better’ and ágy ‘bed’ are existing words). A third possible problem is that although hyphenation strictly follows pronunciation, long consonants pronounced as short are treated as if they were pronounced long (e.g. milliméter is pronounced [miliméter], so hyphenation could be mi-lli-mé-ter, but it follows the written form and will be mil-li-mé-ter; or Kossuth is hyphenated as Kos-suth, although the ss is pronounced short). A further problem is that dz is not considered a single independent phoneme by current Hungarian phonology, but it is treated as a digraph, and both of its letters should be moved together (e.g. ma-dzag ‘string’); the trigraph dzs is also treated as a single letter, even when it is pronounced long (e.g. me-ne-dzser ‘impresario’). Finally, the pronunciation and spelling rules of foreign words are not always known, so people may not be able to hyphenate them correctly (although they can separate them at a different point, or take them to the next line altogether).

Punctuation

At the end of a sentence

Punctuation marks are added to the end of the sentence depending on its intended meaning. The exclamation mark is used not only for exclamations, but also for wishes and commands. If the sentence formally reflects one mood but actually refers to a different idea, the punctuation mark is selected based on the actual meaning. Punctuation marks may be repeated or combined to express an intense or mixed emotion. (For example, Hogy képzeled ezt?! ‘How dare you?!’)

In the case of coordinated clauses, the punctuation mark is adapted to the ending clause. Subordinated clauses take a punctuation mark reflecting the main clause – except if the main clause is only symbolic, emphasizing the subordinate clause.

A comma (or a colon, semicolon etc.) should be placed at the border of clauses whether or not there is a conjunction. This also applies to cases when the clause begins with one of the conjunctions és, s, meg ‘and’ and vagy ‘or’. However, it is sometimes difficult to assess whether the part joined with these conjunctions is a separate clause (because if it is not, no comma is needed). For example: Bevágta az ajtót, és dühösen elrohant. ‘He banged the door and rushed away in fury.’ but Hirtelen felugrott és elrohant. ‘He suddenly jumped up and rushed away.’

Similes introduced with the word mint ‘as, like’ are to be preceded by a comma. The exception is a kind of ‘more than’ construction that has a mere intensifying function (as opposed to ‘practically’ or ‘almost’). In the case of a double conjunction expressing ‘instead of (doing)’, ‘without (doing)’ etc., only the first element should be preceded by a comma – except if the first element closely belongs to the first clause, in which case the comma is placed between the two conjunctions.

Semicolons are generally used to separate sets of closely connected clauses, if these larger sets of clauses are loosely connected to each other. A semicolon may also be used to mark that two single clauses have but a loose relation to each other. Colons draw attention to a forthcoming idea, or they may be used to mark that an important explanation or conclusion follows.
If a clause introduces several separate sentences, all of them (including the first) are written with an uppercase initial. To express that a fairly distinct set of ideas follows, a dash may be used after the full stop, the question mark, or the exclamation mark.

Between clause elements

Coordinated clause elements are separated by commas if no conjunction is used. (A semicolon may be used to separate series of words whose elements are separated by commas.) If a conjunction is used between coordinated clause elements, a comma is used before it, except if the conjunction is one of the words és, s, meg ‘and’ or vagy ‘or,’ where the comma is omitted. Since the abbreviation stb. ‘etc.’ includes the conjunction s ‘and,’ it doesn’t need a comma either. For example: tetszetős, de helytelen elmélet ‘an appealing but incorrect theory,’ a rózsának, a szegfűnek vagy a levendulának az illata ‘the scent of a rose, a carnation, or lavender.’

If a coordinated sentence element is mentioned at the end of the whole clause, separated from the related elements, in a postponed manner, it is separated from the rest of the clause with a comma. For example: Ernyőt hozzál magaddal a kirándulásra, vagy kabátot! ‘Bring an umbrella to the excursion, or a raincoat.’ Coordinated structures formed with coupled conjunctions (e.g. “either – or”) are written with a comma placed before the second conjunction.

Appositions are separated from the referred element with a comma (or a colon) if they are in the same grammatical position as the referred element. If the apposition comes further back in the sentence, the comma will precede it directly. If the apposition is followed by a pause in speech, a comma may be placed after it, too. If a descriptive phrase is added to a personal name but only the last part takes the suffixes (in which case it is not called an apposition), no comma is used after the personal name. For example: Nagy Elemérnek, városunk díszpolgárának ‘to Elemér Nagy, honorary citizen of our town’ – because of the possessive structure, both elements take the suffixes, and the second part can only be an apposition, so a comma is needed. On the other hand: Nagy Elemér díszpolgárnak ‘to Elemér Nagy honorary citizen’ – the whole structure takes one suffix at the very end, thus it cannot be appositive, and no comma is used. If the apposition or the referred element is a derivative of the word maga (“himself” etc.), the comma is not used. However, adverbs used like appositions take the comma.

Subordinated clause elements take no comma (e.g. fekete szemüveges férfi ‘a man with black glasses’ – the word fekete ‘black’ doesn’t belong to férfi ‘man’ but to szemüveg ‘glasses’). If the word mint ‘as’ precedes a phrase expressing status or quality, no comma is used before it (e.g. Bátyámat mint tanút hallgatták ki. ‘My brother was heard as a witness.’) Structures formed with an adverbial participle are not usually separated from the clause with a comma, especially if the participle is directly connected to it. However, if this part is loosely attached to the clause (especially if the participle has its own complement), it is recommended to use a comma.

Elements wedged into a clause

Words or phrases (especially external elements) interposed into a sentence are marked with commas, dashes (with spaces), or parentheses. For example: Bátyámat, a baleset tanújaként, többször is kihallgatták. or Bátyámat – a baleset tanújaként – többször is kihallgatták. or Bátyámat (a baleset tanújaként) többször is kihallgatták.
‘My brother, as the witness of the accident, was heard several times.’ The comma may be omitted around interposed elements depending on the articulation, reflecting the intention of the author, e.g. A vonat, persze, megint késett. ‘The train was, of course, late again.’ can be written without commas as well. If the conjunction mint ‘as’ precedes an interpolation separated by pauses in speech, commas may be used before and after the interjected part. Subordinated clauses are also separated by commas, dashes, or parentheses if they are interposed into another clause. Évi, bár még át tudott volna szaladni az úttesten, hagyta elmenni a teherautót. ‘Eve, though she could have run through the road, let the truck leave.’ If a word, phrase, or clause is interposed into a sentence right next to a punctuation mark, this mark needs to be inserted after the pair of dashes or parentheses. For example: Műszaki egyetemen szerzett diplomát – vegyészmérnökit –, de író lett. ‘He graduated at a technical university – as a chemical engineer – but he became a writer.’ However, if an independent sentence is interposed, its punctuation mark is inserted inside the parentheses. Forms of address Forms of address are usually followed by an exclamation mark, e.g. Kedves Barátaim! ‘My dear friends,’ or a comma can be used in private letters. If this form stands within a sentence, it is separated from the rest with commas. - Problematic point(s): intonation is sometimes unbroken before appositions wedged into sentences, so even a series of books was published by a notable publishing house with the title Magad uram, ha gondod van a PC-vel ‘Do it yourself, Sir, if you are in trouble with the PC,’ without a comma preceding uram. Quotation marks are placed below at the beginning and above at the ending of a quotation, both signs turning left, being curly and double. If another quotation is included in a quotation, angle quotation marks (guillemets) are used, directed towards each other with their tips: („quote1 »quote2« quote1”). If a quoting sentence introduces the quotation, it is preceded by a colon; the ending punctuation mark should be inserted as in the original. Lowercase initials should only be used if they are lowercase in the original. If a quoting sentence follows the quotation, they are separated by a dash (and spaces). Punctuation marks of the original text are preserved, except for the full stop, which is omitted. If the quoting sentence is interposed in the quotation itself, it is written in lowercase and separated with dashes (and spaces). The second quotation mark stands at the end of the quotation. For example: Így felelt: „Igen, tudom.” or „Igen, tudom” – felelte. or „Igen – felelte –, tudom.” ‘“Yes, I know,” he replied.’ If the quotation is organically interwoven into one’s own text, the quoted part is marked with quotation marks, and common words beginning the quotation are written in lowercase (even despite the original). For example: A tanterv szerint az iskola egyik célja, hogy „testileg, szellemileg egészséges nemzedéket neveljen”. ‘According to the syllabus, one of the goals of a school is “to bring up a generation healthy in body and mind.”’ When quoting others’ words in terms of their content, the quotation marks are not used: Alkotmányunk kimondja, hogy társadalmi rendszerünknek a munka az alapja. ‘Our constitution states that our social system is based on work.’ Indirect (reported) speech is treated in the same way. 
In fiction and prose, quotations are marked by dashes instead of quotation marks, placed at the beginning of a line. If the quotation is written in a separate line, the only dash is the one that precedes it. If the quotation is followed by the quoting sentence, they are separated by another dash (the full stop omitted from the end, other punctuation marks retained, as described above). If a quotation is continued after the author’s words, another dash follows. For example:
- – Nagyon vártalak már – fogadta a barátját. – Sok a teendőnk.
- “I have been waiting for you,” he greeted his friend. “We have a lot to do.”

Between words and their elements

Interjections are preceded and followed by commas. If an interjection is followed by the emphatic words be or de ‘how much,’ the commas can be omitted depending on the stress and pause conditions of the sentence. If two conjunctions follow each other (e.g. because of an interposed clause), only the first is preceded by a comma, e.g. Hívták, de mert hideg volt, nem indult útnak. ‘They invited him, but as it was cold, he didn’t set out.’

A hyphen is used between words and their elements in the following cases (an exhaustive list, partly reiterating points mentioned elsewhere):
- in case of three successive identical consonant letters at the border of compound elements and between a proper name and its suffix (see above)
- in certain kinds of word repetitions and coordinated compounds, in several types of subordinated compounds (see above), as well as in unusual, occasional compounds in the poetic language (e.g. bogáncs-szívem ‘my heart of thistle’)
- if the ending or beginning word of two or more compounds is the same, and only the last instance is written out in full: the preceding, omitted instances are marked with a hyphen, e.g. tej-, zöldség- és gyümölcsfelhozatal ‘milk, vegetable, and fruit supply,’ bortermelő és -értékesítő szövetkezet ‘cooperative for wine production and marketing’
- in numbers written in letters: beyond two thousand, if more numerals follow (see below)
- nouns and their derived adjectives are connected to proper names in several cases with a hyphen (see above)
- with double surnames (see above)
- with several types of geographical compounds (see above)
- with the enclitic question word -e (e.g. Tudod-e, merre menjünk? ‘Do you know which way to go?’)
- in case of pairs of numbers (whether in digits or in letters) given in an approximative sense (e.g. nyolc-tíz nap ‘some eight or ten days’)

The dash is referred to in Hungarian orthography under two names: gondolatjel (lit. “thought mark”) and nagykötőjel (lit. “big hyphen”). The first form applies to cases where it separates an interposed remark, usually a clause or a phrase (see above): this one is always used with spaces on either side (or a comma and a space after it). The second one is used to connect single words with each other to create a phrase: this one is normally used without spaces. This latter dash is used between words in the following cases (an exhaustive list):
- to connect names of peoples or languages (e.g. francia–spanyol határ ‘French-Spanish border’)
- to link proper names in a loose, occasional (i.e. not institutionalized) relationship (like when authors of a book are mentioned after each other, or for matches of two sports teams)
- to express a relationship extending between two points (in time or space, e.g. Budapest–Bécs ‘Budapest-Vienna [route]’). Note: the dash can exceptionally be surrounded by spaces in more complex cases, e.g. i. e. 753 – i. sz.
The ellipsis sign (…) is used to mark that an idea is unfinished (and further thoughts can be inferred from what is written), or that a part of a text has been omitted from a quotation.

Suffixes are normally attached to words directly. However, a hyphen is used in a couple of cases (an exhaustive list, referring to other passages of the regulation):
- in case of three successive identical consonant letters in certain cases that cannot be simplified, such as with proper names ending in double letters and having a suffix (see above)
- personal and geographical names as well as titles of periodicals consisting of several separate elements take adjective-forming derivational suffixes with a hyphen (see above), e.g. Leonardo da Vinci-s ‘typical of L. da Vinci’ (but Leonardo da Vincivel ‘with L. da Vinci’), New York-i ‘N. Y. C.-based’ (but New Yorkban ‘in N. Y. C.’)
- proper names (including personal names, geographical names, institution names, titles of periodicals) with one single element take a hyphen before suffix-like derivational elements such as -szerű and -féle (e.g. Petőfi-szerű ‘Petőfi-like,’ cf. Petőfivel ‘with Petőfi’ and petőfis ‘typical of Petőfi’)
- if the word-final letter is not pronounced (silent), or this letter is part of a more complex cluster of letters, suffixes are connected with a hyphen (see above)
- digits, punctuation marks, typographical signs, abbreviations, and acronyms take a hyphen before suffixes (see below)

Other information on punctuation

No full stop is needed after the titles of periodicals, books, poems, articles, studies, and treatises, or after institution names and direction signs, if they are given highlighted or on their own. However, lower-level section titles can be inserted in a text and followed by other sentences: in this case, a full stop is used after them. Question and exclamation marks can be used even in highlighted titles.

A full stop is used in the following cases:
- after Roman and Arabic numerals denoting ordinal numbers (see below)
- after certain types of abbreviations (see below)
- after numbers marking the year, the month, and the day of a date (see below).

A colon is used to highlight a phrase or sentence mentioned as an example. This sign is also used between an author’s name and the title of the work if they are given without a syntactic reference to each other. A possessive construction, however, eliminates the colon. (For example: Arany János: Toldi but Arany János Toldija ‘Toldi by János Arany.’)

A hyphen is used at the end of a line when part of a word is carried over to the next line. If a word already contains a hyphen for whatever reason, that hyphen can be used at the end of the line, just as if it contained a dash.

If a part given in parentheses has a fairly close connection to the sentence, the closing punctuation mark is used after it. If the part in parentheses ends in a full stop, the closing punctuation mark still needs to be used after the parenthetical part.

Quotation marks may be used (though should not be overused) to express ironic or other emotional overtones. Quotation marks can be used around the titles of books, works, articles etc.; in this case, suffixes can be connected with a hyphen.

The beginning of decimal fractions is marked with a comma. Numerals of more than four digits are divided by spaces, in groups of three, counted from the back. (See more below.)
The following signs and symbols are also used relatively frequently (with minor differences from Anglo-Saxon usage): plus (+) for addition, minus (–) for subtraction, the interpunct ( · ) for multiplication, the colon ( : ) for division, the equals sign (=) for equality, the percent sign (%) to express percent, the slash (/) to express alternatives or fractions, the section sign (§) to refer to sections, a combination of an upper dot, a slash, and a lower dot (⁒) to mean “please turn over,” the asterisk or superscript numbers (* or 1) to mark notes, a right double curly quote (”) to express repetition (as a ditto mark), a right single curly quote (’) to express lack, the degree symbol to mark the (Celsius) degree, and the tilde (~) to express repetition or equivalence. Suffixes are connected to the percent sign, the section sign, and the degree symbol with a hyphen, and the suffix reflects the pronounced form, with respect to assimilations and linking vowels, e.g. 3%-kal [pr. “három százalékkal”] ‘by 3%.’

Abbreviations and acronyms

These two groups are distinguished by whether the shortened form is used only in writing (abbreviations) or in speech as well (acronyms). Acronyms may be pronounced with the names of their letters (e.g. OTP ‘National Savings Bank’ [pr. ótépé]) or, if possible, in full (MÁV ‘Hungarian State Railways’ [pr. máv]). The article preceding these forms is always adapted to the spoken form.

Abbreviations are written in one word whether they are created from single nouns, nouns with derivational suffixes, or compounds, and they are written with a full stop. If an abbreviation retains the ending of the original word, the full stop is still preserved (e.g. pság. < parancsnokság ‘headquarters’). The abbreviation of a phrase normally contains as many elements as the original phrase (e.g. s. k. < saját kezével ‘by his/her own hand’), but there are exceptions (e.g. vö. < vesd össze ‘compare’). Letter case is usually kept in abbreviations (e.g. Mo. < Magyarország ‘Hungary’), but some abbreviations created from lowercase words use the uppercase (e.g. Ny < nyugat ‘west’). Units of measurement are used in accordance with the international standard, depending on whether the sign comes from a common name (m < méter) or a proper name (N < newton, after Isaac Newton). Standard forms of abbreviations are not to be altered even in all-capitalized inscriptions (ÁRA: 100 Ft ‘PRICE: 100 HUF’).

Some abbreviations are written without a full stop, such as names of currencies, cardinal and ordinal directions, country codes of cars, codes of country names, chemical, physical, and mathematical symbols, symbols of units, etc. The full stop can be omitted from abbreviations in encyclopedias, but they are to be explained in a legend. A full stop is not used after abbreviations whose last element is a full word (e.g. uaz < ugyanaz ‘the same’).

Suffixes are attached to abbreviations based on their pronunciation (even if the pronunciation is considerably different from the symbol, e.g. Fe [vas ‘iron’] > Fe-sal [vassal ‘with iron’], and the article, too, should reflect the pronounced form). If an abbreviation forms a compound with a full word, they are connected with a hyphen (e.g. fszla.-kivonat < folyószámla-kivonat ‘statement of current account’).

Acronyms are classified into two groups: those consisting only of initials (betűszók, lit. ‘letter-words’), and those consisting of parts of the original words (szóösszevonások ‘word contractions’).
The first group is divided again by whether they denote proper names (written in uppercase, e.g. ENSZ < Egyesült Nemzetek Szövetsége ‘United Nations Organization’; note that both letters of the digraph SZ are capitalized) or common names (written in lowercase, e.g. vb < végrehajtó bizottság ‘executive committee’; note that it is written as one word despite the two elements). Some acronyms created from common names are still written in uppercase, though, especially in the sciences (URH < ultrarövidhullám ‘ultra-high frequency’), but other capitalized acronyms may be accepted too (TDK < tudományos diákkör ‘students’ scholarly circle’). In some cases, full-fledged words are created from the pronounced form of acronyms standing for common names (e.g. tévé < tv < televízió).

- Problematic point(s): if a word is added to an abbreviation, it is necessary to know whether the acronym already includes the meaning of this word: if not, the result is considered a compound, so a hyphen is needed (e.g. CD lemez ‘CD disk’ doesn’t need a hyphen because ‘disk’ is already included in the meaning, but CD-írás ‘CD burning’ does).

Acronyms of the second group are created from longer parts of the original words (in fact, at least one word of the original should keep at least two letters, not counting digraphs). Their letters are not all capitalized; only the initial of acronyms that derive from proper names is (e.g. Kermi < Kereskedelmi Minőség-ellenőrző Intézet ‘Commercial Quality Control Institute,’ cf. gyes < gyermekgondozási segély ‘maternity benefit’).

Neither type of acronym needs a full stop between its elements or at its end. Acronyms take suffixes in accordance with their pronounced forms, whether their letters are pronounced one by one or as a full word (e.g. tbc-s [tébécés] ‘one with tuberculosis’). Those from the first group, consisting only of word initials, are suffixed with a hyphen. Their capitalized types retain their uppercase even in their adjectival forms (ENSZ-beli ‘one from the UN’), and their final vowel letter is not lengthened even if it would be phonologically justified (e.g. ELTE-n [eltén] ‘at ELTE’). Those from the second group, however, consisting of shorter pieces of the constituent words, take suffixes without a hyphen (e.g. gyesen van ‘she is on maternity leave’). The same happens to words that were created from pronounced letters (e.g. tévézik ‘watch TV’). Proper-name types of these acronyms are written in lowercase if an adjective is formed from them (e.g. kermis ‘Kermi-related’). In addition, their final vowel letter may be lengthened in accordance with general phonological rules (e.g. Hungexpo > Hungexpónál ‘at Hungexpo’).

Compounds are created with acronyms by the following rules: those from the first group take other elements with a hyphen (e.g. URH-adás ‘UHF broadcast’), and proper-name types of the second group behave the same way (e.g. Kermi-ellenőrzés ‘control by Kermi’). The common-name types of the second group, however, can be written as one word with other elements, except if they require a hyphen because of length (e.g. tévéközvetítés ‘TV transmission’).

Numerals that can be pronounced with a short word are usually written in letters, just like those having a suffix, a postposition, or another compound element. On the other hand, digits should be used for longer or bigger numerals, as well as to note down exact quantities, dates, amounts of money, measurements, statistical data, etc.
If cardinal numbers are written in letters, they should be written as one word up to 2000 (e.g. ezerkilencszázkilencvenkilenc ‘1,999’) and divided by hyphens according to the usual three-digit division above 2000 (e.g. kétezer-egy ‘2,001’). Numbers written in digits can be written without a space up to four digits; above that, they are divided by spaces from the end by the usual three-digit division (e.g. 9999 but 10 000). If numbers are written under each other in a column, all of them can be divided by spaces.

Ordinal numbers written in digits take a full stop (e.g. 3. sor ‘3rd line’). The full stop is retained even before the hyphen that connects suffixes (e.g. a 10.-kel ‘with the 10th’). Dates are an exception to this rule, see below.

If a fraction functions as a noun, the quantifier is written separately (e.g. egy negyed ‘one quarter’). However, if a fraction takes an adjectival role in a phrase, the two parts are written as one word (e.g. egynegyed rész ‘a one-quarter part’). Giving the time of day also follows this rule. The integer part of a decimal is separated from the rest by a comma (e.g. 3,14 ‘3.14’).

Numbers are usually written in Arabic numerals. Roman numerals are only used in some special traditional cases, and only to express ordinal numbers (e.g. the numbering of monarchs, popes, districts of a city, congresses, etc.). Their use is advisable if they have a distinctive role as opposed to Arabic numerals, e.g. to denote the month between the year and the day, or to mark the floor number in front of the door number.

The year is always given in Arabic numerals and is followed by a full stop. The name of the month can be written in full or abbreviated, or it can be marked with a Roman or Arabic numeral. The day is always written in Arabic numerals. Dates are sometimes written without full stops and spaces, divided only by hyphens.

Normally, a full stop is needed after the year. However, it is omitted in three cases: (1) if it is in a possessive relationship with the following word, (2) if it is followed by a postposition or an adjective coined from it, or (3) if it is the subject of a sentence or stands on its own in parentheses. For example, 1994. tavasz ‘the 1994 spring’ but 1994 tavasza ‘the spring of 1994’ and 1994 után ‘after 1994.’ When digits expressing the year and the day take suffixes, the full stop is dropped before the hyphen (e.g. 1838-ban ‘in 1838’ and március 15-én ‘on March 15th’). The word elsején ‘on the 1st of’ and its suffixed forms are abbreviated as 1-jén etc. If a day is followed by a postposition, the full stop is retained (e.g. 20. és 30. között ‘between the 20th and the 30th’).

Letters and other postal consignments are to be addressed according to the official addressing patterns of the Hungarian Postal Service. (This currently means that the name comes first, then the settlement, the street or the P.O. box, and finally the postcode, written under each other. The street line contains the street number first, optionally followed by the floor number and the door number.)

The words for “hour” and “minute” (óra and perc) are not usually abbreviated in running text. If the time is given in digits, a full stop is placed between the hour and the minute without a space (e.g. 10.35). This latter form takes a hyphen before suffixes (e.g. 10.35-kor ‘at 10:35’).

Collation

Digraphs are distinguished in collation (i.e. when determining the order of entries in a dictionary or directory) from the letters they consist of.
For example, cukor is followed by csata, even though s precedes u, because cs is considered a single entity and follows all words starting with c. In general dictionaries, contracted forms of digraphs are collated as if they were written out in full, e.g. Menyhért precedes mennybolt, even though n precedes y, because nny consists of ny + ny, and h precedes ny. Short and long versions of vowels are considered equal for the purposes of collation (e.g. ír precedes Irak) unless the words are otherwise spelt identically, in which case the short vowel precedes the long one (e.g. egér precedes éger). Phrases and hyphenated compounds are collated ignoring the space or the hyphen between their elements; lower and upper case are not taken into account either. Obsolete digraphs in traditional Hungarian names and foreign words are treated as a series of individual letters. Diacritics are only taken into consideration if there is no other difference between the words. However, in encyclopedias, map indices, and other specialized works, where Hungarian and foreign names are mixed, the universal Latin alphabet is followed.

History

The rules of Hungarian orthography were first published by the Hungarian Academy of Sciences (HAS) in 1832, edited by Mihály Vörösmarty. Major revisions followed in 1877, 1922, 1954, and 1984. The currently effective version is the 11th edition, from 1984. A new revised edition is currently under preparation.

The rules of Hungarian orthography are laid down by the Hungarian Language Committee of the Research Institute for Linguistics of the Hungarian Academy of Sciences and published in a book titled Rules of Hungarian Orthography (A magyar helyesírás szabályai). This volume is supplemented by two orthographic dictionaries, one published by HAS and one published by the publisher Osiris Kiadó. The former is considered more official and comprises 140,000 words and phrases; the latter is more comprehensive, including more than 210,000 words and phrases as well as a more detailed elaboration of the regulations.

Orthography and society

Although orthography only gives instructions on how to write down an existing text, usage-related suggestions are also given in most Hungarian linguistic publications (such as whether a construction should be rephrased or a word should be avoided). These periodicals include Magyar Nyelv, Magyar Nyelvőr, Édes Anyanyelvünk, Magyartanítás, and Nyelvünk és Kultúránk, and several other periodicals have linguistic columns (such as Élet és Tudomány). Ádám Nádasdy sometimes touched on orthographic issues in his column popularizing linguistics in Magyar Narancs, and in his books based on this column and its forerunners. New entries of Korrektorblog (Proofreader’s Blog – “The mild Grammar Nazi”) are published on the main page of the popular news portal Index.hu. Linguistic educational programmes have been broadcast on television, the most famous being Álljunk meg egy szóra! (“Let’s stop for a word”), screened on more than 500 occasions between 1987 and 1997; some of its material was published in a book.

Apart from the Geographical Names Committee and the manual on geographical names mentioned above, other fields have their own specialized orthographical dictionaries, such as economics, medicine, technology, chemistry, and military affairs, as well as collections of examples in periodicals, such as for zoological and botanical names.
Orthographical competitions are organized at primary, secondary, and tertiary level in every year (Zsigmond Simonyi competition for upper primary schools – for students aged 10 to 14 –, József Implom competition for secondary schools, and Béla J. Nagy competition for universities). Word processors, some Internet browsers and mailing applications are supplied with a Hungarian spellchecker: Hunspell for OpenOffice.org, Firefox and Thunderbird. A Hungarian company, MorphoLogic has developed its own proofing tools, which is used in Microsoft Office. People can seek advice for free in orthography-related and other linguistic topics from the Department of Normative Linguistics at the Research Institute for Linguistics of the Hungarian Academy of Sciences or from the Hungarian Linguistic Service Office. - AkH.: A magyar helyesírás szabályai. [“akadémiai helyesírás”] Akadémiai Kiadó, Budapest (several prints after 1984). ISBN 963-05-7735-6. (The numbers refer to passages.) - OH.: Laczkó, Krisztina and Attila Mártonfi. Helyesírás. Osiris Kiadó, Budapest, 2004. ISBN 963-389-541-3. (The numbers refer to page numbers.) - AkH. 2 b) - AkH. 2 c) - AkH. 7. a) - AkH. 7. b) - AkH. 4. b) - AkH. 8. - AkH. 10–11. - AkH. 12. - AkH. 17. - AkH. 49. - AkH. 86. - AkH. 92. - OH. pp. 56–57, 60–61 - List of Hungarian common nouns with pronunciation variability and with a spelling different from pronunciation (Hungarian Wikipedia) - AkH. 46. - AkH. 89. - AkH. 93. - AkH. 94. - AkH. 96. - AkH. 97. - AkH. 98. - AkH. 100. a) - AkH. 100. b), 102. a) - AkH. 100. c), 102. b) - AkH. 101. a), 103. a) - AkH. 101. b), 103. b) - AkH. 95. - AkH. 106. - AkH. 107. - AkH. 123. - AkH. 125. - AkH. 128. - AkH. 125. c) - AkH. 123. a) - AkH. 125. b) - OH. pp. 94–96 - AkH. 131. - AkH. 129. - AkH. 112. - AkH. 108., 124., 126., 130. - AkH. 137. - OH. pp. 105–106. - AkH. 114. a) - AkH. 114. b) - AkH. 119. - AkH. 115. - Kálmán, László and Ádám Nádasdy. Hárompercesek a nyelvről [“Three-minute Stories on Language”]. Osiris, Budapest, 1999, p. 65 - AkH. 138. - OH. pp. 129–130 - AkH. 139. a) - AkH. 139. b) - AkH. 139. c) - HVG, 2008/17 - The book at Libri.hu - AkH. 201. - AkH. 156., 157. - AkH. 158. - AkH. 161. - AkH. 160. - AkH. 167. - OH. 170. - AkH. 162., 163. a)–b) - AkH. 163. c) - AkH. 164. - AkH. 168. - AkH. 170. - AkH. 172. - Fábián, Pál – Ervin Földi – Ede Hőnyi. A földrajzi nevek helyesírása. Akadémiai, Budapest, 1998 - AkH. 175. - OH. 195. - OH. pp. 198–199 - AkH. 176–177. - AkH. 178. - AkH. 179. - AkH. 180. - AkH. 181. - AkH. 182. - AkH. 183. - AkH. 184. - AkH. 185. - AkH. 187. - AkH. 188. c)–d) - AkH. 189. - AkH. 190. - AkH. 193., 194. - AkH. 195. - AkH. 196. - AkH. 197. - AkH. 198. - AkH. 200. - AkH. 145., 147. - AkH. 146. - AkH. 191. - AkH. 149. - AkH. 153. - AkH. 203. - AkH. 212. - AkH. 213. - AkH. 204. - AkH. 218., 219. - AkH. 205., 210. - AkH. 214. - AkH. 207. - AkH. 210. - AkH. 215. - AkH. 216. a) - AkH. 216. b) - AkH. 217. a) - AkH. 217. b) - AkH. 217. c) - AkH. 233. - AkH. 224., 226. g) - AkH. 225. - AkH. 228. - AkH. 229. - AkH. 232. - AkH. 226. f) - AkH. 226. a) - AkH. 226. b) - AkH. 226. e) - AkH. 226. d) - AkH. 231. - AkH. 234. - AkH. 238. - AkH. 240. - AkH. 241. - AkH. 242. - AkH. 243. a) - AkH. 243. b) - AkH. 243. c) - AkH. 243. d) - AkH. 244. - AkH. 245. - AkH. 246. - AkH. 247. a) - AkH. 247. b) - AkH. 247. c) - AkH. 247. d) - AkH. 247. e) - AkH. 247. f) - AkH. 248. a) - AkH. 248. b) - AkH. c) - AkH. 248. d) - AkH. 248. e) - AkH. 249. a) - AkH. 249. b) - AkH. 249. c) - AkH. 250. - AkH. 251. - AkH. 252. - AkH. 253. - AkH. 
254–255. - Magad uram, ha gondod van - AkH. 256. - AkH. 257. - AkH. 258. - AkH. 260. - AkH. 261. - AkH. 262. - AkH. 263. - AkH. 264. - AkH. 265. - AkH. 266. - AkH. 267. - AkH. 268. - AkH. 269. - AkH. 270. - AkH. 271. - AkH. 272. - AkH. 273. - AkH. 274. - AkH. 275. - AkH. 276. - AkH. 277. - AkH. 278. - AkH. 280. - AkH. 282. - AkH. 283. - AkH. 284. - AkH. 285. - AkH. 286. - AkH. 287. - AkH. 288. - AkH. 289. - AkH. 290. - AkH. 292. - AkH. 293. - AkH. 294. - AkH. 295. - AkH. 296. - AkH. 297. - AkH. 298. - AkH. 299. - AkH. 14. a) - AkH. 14. c) - AkH. 14. d) - AkH. 14. e) - AkH. 15. - AkH. 16. - Magyar Nyelv - Magyar Nyelvőr - Édes Anyanyelvünk - The situation of language culture in Hungary - Modern Talking - Európai nyelvművelés. Az európai nyelvi kultúra múltja, jelene és jövője. Edited by Balázs Géza, Dede Éva. Inter Kht. – PRAE.HU, Budapest, 2008. ISBN 978-963-87733-2-6. Page 174 - Grétsy, László – István Vágó. Álljunk meg egy szóra! Ikva, Budapest 1991, ISBN 963-7760-91-1. - Tinta, Budapest, 2002, ISBN 963-9372-33-1 - Akadémiai, Budapest, 2004, ISBN 963-05-6298-7 - Műszaki, Budapest, 1990, ISBN 963-10-8268-7 - Műszaki, Budapest, 1982, ISBN 963-10-4404-1 - Zrínyi, Budapest, 1980, ISBN 963-326-528-2 - Gozmány László 1994. A magyar állatnevek helyesírási szabályai. Folia Entomologica Hungarica – Rovartani Közlemények, 55. 429–445. - Jávorka Levente – Fábián Pál – Hőnyi Ede (eds.) 1995/2000. Az állatfajtanevek helyesírása. Állattenyésztés és Takarmányozás, 44. 465–470. = Acta Agraria Kaposváriensis, 4. 82–86. - Mezőgazda, Budapest, 1999, ISBN 963-9121-22-3 - MorphoLogic proofing tools - Helyesírás- és nyelvhelyesség-ellenőrzés idegen nyelven a Word 2003 programban - Department of Normative Linguistics
Two-way ANOVA (Analysis of Variance) is a statistical method for analyzing the effect of two independent variables on a dependent (response) variable. Two-way ANOVA is also known as two-factor ANOVA because it involves two independent variables (factor or group variables). You can use the built-in aov() function to perform two-way ANOVA in R. The general syntax of the aov() function is:

# fit ANOVA model
model <- aov(y ~ x1 + x2 + x1:x2, data = df)
# view ANOVA summary
summary(model)

- y: dependent variable (should be a continuous variable)
- x1: first independent variable (should be a categorical variable)
- x2: second independent variable (should be a categorical variable)
- x1:x2: interaction between the two independent variables

The following comprehensive example illustrates how to use two-way ANOVA for analyzing group differences (main effects) and interaction effects. As you read this post, you will gain a deeper understanding of two-way ANOVA and its practical applications.

How to Perform Two-Way ANOVA in R

For example, a researcher wants to analyze the effect of plant genotypes and locations on plant height. The researcher collects data on three genotypes from three different locations and measures plant height. The researcher wants to test the following null hypotheses:

Null hypothesis 1: The plant height is equal among plant genotypes, i.e. the mean plant height is equal across genotypes
Null hypothesis 2: The plant height is equal at different locations, i.e. the mean plant height is equal across locations
Null hypothesis 3: There is no significant interaction effect of plant genotype and location on plant height

Here, the alternative hypothesis is two-sided, as plant height can be lower or higher for the individual independent variables or for their interaction.

Load and view the dataset,

# load dataset
df <- read.csv("https://reneshbedre.github.io/assets/posts/anova/two_way_anova.csv")
# view the first rows of the data frame
head(df)
  genotype location height
1        A       L1      5
2        A       L1      6
3        A       L1      7
4        A       L2      7
5        A       L2      7
6        A       L2      6

Check descriptive statistics (mean and variance) for each plant genotype and location,

# load package
library(dplyr)
# get descriptive statistics
df %>% group_by(genotype, location) %>% summarise(mean = mean(height), var = var(height))

# A tibble: 9 × 4
# Groups: genotype
  genotype location  mean   var
  <chr>    <chr>    <dbl> <dbl>
1 A        L1        6    1
2 A        L2        6.67 0.333
3 A        L3       11    1
4 B        L1        7.67 2.33
5 B        L2       10    1
6 B        L3       15    1
7 C        L1        5.67 0.333
8 C        L2        7.33 0.333
9 C        L3       15.7  1.33

Visualize the data with a box plot,

# load package
library("ggplot2")
# create boxplot
ggplot(df, aes(x = factor(genotype), y = height, fill = location)) +
  geom_boxplot() +
  geom_point(aes(fill = location), size = 4, shape = 21, position = position_jitterdodge())

From the box plot and descriptive statistics, we can see that plant height differs greatly by genotype and location. Now, we will perform a two-way ANOVA to check whether these differences in plant height are statistically significant and whether there is a significant interaction effect between genotype and location.

Perform a two-way ANOVA and summarise the results with the summary() function,

# fit model
model <- aov(height ~ genotype + location + genotype:location, data = df)
# summary statistics
summary(model)

                  Df Sum Sq Mean Sq F value   Pr(>F)
genotype           2  40.67   20.33   21.11 1.90e-05 ***
location           2 277.56  138.78  144.12 8.38e-12 ***
genotype:location  4  23.11    5.78    6.00    0.003 **
Residuals         18  17.33    0.96
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
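To complement the numerical summary, the interaction suggested by the significant genotype:location term can also be visualized with base R's interaction.plot(). This is a minimal sketch, assuming the df data frame loaded above; the labels are illustrative.

# interaction plot: mean height for each genotype across locations;
# non-parallel lines hint at an interaction between the two factors
with(df, interaction.plot(
  x.factor = location,       # variable on the x-axis
  trace.factor = genotype,   # one line per genotype
  response = height,         # response variable
  fun = mean,                # plot group means
  xlab = "Location", ylab = "Mean plant height",
  trace.label = "Genotype"
))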
Note: This ANOVA is appropriate for a balanced design, i.e. an equal sample size in each group. If you have an unbalanced design, you should perform an ANOVA with type III sums of squares (a sketch is given after the interpretation below).

The two-way ANOVA reports the following important statistics for the main effects and the interaction effect:

- genotype (main effect): F value = 21.11, p value = 1.90e-05
- location (main effect): F value = 144.12, p value = 8.38e-12
- genotype:location (interaction effect): F value = 6.00, p value = 0.003

According to the two-way ANOVA results, the p value is significant [F(2, 18) = 21.11, p < 0.05] for genotype. Hence, we reject the null hypothesis and conclude that plant height differs significantly among genotypes.

Similarly, the p value is significant [F(2, 18) = 144.12, p < 0.05] for location. Hence, we reject the null hypothesis and conclude that location has a significant effect on plant height.

The interaction between plant genotype and location is also significant [F(4, 18) = 6.00, p < 0.05]. Hence, we reject the null hypothesis and conclude that there is a significant interaction effect, i.e. the effect of genotype on plant height depends on the location.
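As noted above, the aov() fit shown here relies on a balanced design. For an unbalanced design, one common approach (a sketch under stated assumptions, not the only option) is to request type III sums of squares with the Anova() function from the car package, which is assumed to be installed; sum-to-zero contrasts are usually set first so that the type III tests are meaningful.

# the car package provides Anova() with type II/III sums of squares (assumed installed)
library(car)

# sum-to-zero contrasts are generally recommended before type III tests
options(contrasts = c("contr.sum", "contr.poly"))

# refit the same model (genotype * location expands to both main effects plus the interaction)
model_t3 <- aov(height ~ genotype * location, data = df)

# type III ANOVA table; with a balanced design this agrees with the summary() output above
Anova(model_t3, type = 3)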
Redi's Experiment Worksheet

This worksheet lets students explore the scientific method and experimental design through Francesco Redi's experiment on spontaneous generation. Students will make observations, create a hypothesis, determine variables, and use evidence to form a conclusion. An accompanying presentation details the steps of the scientific method using Redi's famous experiment with flies and putrefying meat.

Ideally, how many variables should an experiment test at a time?

What are the steps in the scientific method?

What was Redi's initial observation? His question: why do maggots suddenly appear on raw meat? His hypothesis: flies produce maggots.

What was Francesco Redi's hypothesis about the appearance of maggots? Redi was trying to prove that maggots came from flies, and that spontaneous generation was not real. In his experiment, the independent variable was the covering on the jars (absent or present).

What were all three jars exposed to? In 1668, the Italian biologist Francesco Redi did experiments to prove that maggots did not come from meat. At age 21, he had received a medical degree from the University of Pisa, Italy. He covered the first group of jars with fine cloth.
Scientists believe that Psyche could be the metal core of an early planet that lost its mantle and crust in collisions early in the formation of the solar system. Meanwhile, new research in The Planetary Science Journal looked at Psyche via the Hubble Space Telescope at two specific points in its rotation, in order to capture both sides of the asteroid. The research includes the first ultraviolet observations of Psyche, helping to improve our understanding of its surface and possible composition.

“We looked at how the ultraviolet rays reflect off the asteroid surface,” Tracy Becker told CNN. She is the study’s lead author and a planetary scientist at the Southwest Research Institute. “The way that UV radiation from Psyche is reflected is very similar to the way iron reflects sunlight,” she explained.

The importance of studying Psyche

Studying Psyche could give us a better understanding of the earliest periods in the history of the solar system, when objects had “higher inclination and crazier eccentricities” and there were more opportunities for collisions, Becker told CNN. If Psyche was indeed the metal core of an early planet, a closer look at it could tell us a lot about the kind of planetary core that we will never be able to explore directly, Becker said.

According to Becker, the study also found two possible signs of changes on Psyche’s surface caused by the solar wind. “The first is that, as we go deeper into the UV, the asteroid appears brighter,” Becker said. In the past, when this was seen on other planetary bodies, including the Moon, it was often because charged particles from the Sun were interacting with materials on the surface, a process known as space weathering, she added.

The second sign, according to Becker, is the detection of ultraviolet absorption bands of iron oxide. “That could imply that there are some types of interactions with oxygen and metals,” Becker said. According to Becker, the oxygen could have come from the Sun, or it could already have been present in the asteroid’s materials. Becker said more research will be needed to connect these findings with information about how the asteroid formed.

Preparing to visit Psyche

The research comes as NASA’s mission to Psyche, led by Arizona State University, is taking shape. Lindy Elkins-Tanton, a planetary scientist and the mission’s principal investigator, told CNN: “We are making space hardware and getting ready for launch by August 2022.” Elkins-Tanton is also a co-author of the new research.

Elkins-Tanton explained that the unmanned spacecraft will arrive at Psyche in January 2026 and will orbit the asteroid for 21 months, mapping and studying it from orbit. Upon reaching Psyche, the mission will be the first to photograph the asteroid. Scientists plan to release these images to people on Earth to see and study further, possibly within 30 minutes of taking them, Elkins-Tanton said. “Everyone in the world will be able to look at Psyche at the same time we do, and scratch their head and say, what is this?” she added.

What we want to know about Psyche

Elkins-Tanton said she was excited about the scientific community’s interest in learning more about Psyche ahead of the mission, which will be the real test of the theories about the asteroid put forward so far. “There’s a chance for people to come up with measurements, hypotheses and predictions, then really find out if they’re true, because we’ll go and find out,” she added.
Elkins-Tanton hopes the mission will answer questions that may help us understand the “ingredients that make up the cake” – our planet. “Does [Psyche] have oxygen mixed into it, in the way that this study has shown it may? Or other light elements, like sulfur, or even potassium, mixed into the metal phase? Based on its composition, we can say something about the temperature and pressure conditions under which it formed; that will tell us something about the size of the body it formed in, and about what shaped our Earth.”

“One thing we can promise right now is that Psyche will surprise us,” said Elkins-Tanton. “Everything we know about it now is probably going to go wrong when we go there and find out.”

A $10,000 quadrillion asteroid

Regarding the staggering estimate that Psyche could be worth $10,000 quadrillion, Elkins-Tanton said that she was responsible for giving that number in interviews when the NASA mission was first announced in 2017. According to Elkins-Tanton, while the conversation about mining asteroids for resources is growing here on Earth, Psyche is not the target we should strive for. “We cannot bring Psyche back to Earth. We have absolutely no technology to do that,” says Elkins-Tanton. Even if it were possible to bring metal back from Psyche without destroying Earth, doing so would likely crash the market, Elkins-Tanton said. “There are all kinds of problems with this, but it’s still fun to think about the value of a piece of metal the size of Massachusetts.”

Exploiting space and our imagination

According to Elkins-Tanton, objects close to Earth are more realistic candidates for space exploration. One of the most interesting ideas is to use asteroids as a source of water, which could be turned into rocket fuel. “Most of the nearest asteroids have no water ice, but they have water-bearing minerals bound in their lattice,” Elkins-Tanton explains; the water can be extracted by heating the minerals.

“They are almost like small refueling stations,” she said. “This goes a bit beyond what we can really do yet, but I like it because it shows how ambitious people can be and how powerful our imagination is,” Elkins-Tanton told CNN. “For me, it’s the tremendous power of space exploration – it gives us the drive to do great things,” she added.
Often you are asked to write an answer to a given number of decimal places (be careful to read the question properly!).

What you need to do:
1. Count the number of decimal places you need.
2. Look at the next digit. If it's 4 or below, just write down the answer with the right number of decimal places. If it's 5 or above, write down the number but put your last decimal place up by one.

For example: 2.3635 to two decimal places is 2.36

For example: 53.586 to two decimal places is 53.59

What if the last digit is a 9? A 9 goes up to a 10, so you need to put a zero in the last column and add one to the previous number.

For example: 8.6397 to three decimal places is 8.640

If you are not told how many places to write, just be sensible! Generally, you should go to one more place than the numbers used in the question. With angles, no more than one decimal place should be used unless told otherwise.

Significant figures involve all digits, not just decimal places. Zeros are only "significant" if they separate two other non-zero digits!

What you need to do:
1. Start counting at the first non-zero digit until you have the number of digits that you need.
2. Look at the next digit. If it's a 4 or below, just write the number down leaving the last digit the same. If it's a 5 or above, put the last digit up by one.
3. If you are rounding whole numbers (i.e. to the left of the decimal point), put zeros in all the other columns after your last digit until you reach the decimal point.

e.g. 12 736 to three significant figures is 12 700
e.g. 6530 to one significant figure is 7000
e.g. 0.576 to two significant figures is 0.58

In real situations, use common sense to decide on your accuracy. E.g. the length of a back garden would not be written as 8.5632 metres. It would be more sensible to write 8.6 metres!
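For readers who want to check these examples programmatically, the same rounding can be sketched in R with the built-in round() and signif() functions. This is only an illustration: R's round() follows the IEEE "round half to even" convention, so numbers ending exactly in 5 may not always round up in the way described above.

# decimal places
round(2.3635, 2)   # 2.36  (next digit is 3, so round down)
round(53.586, 2)   # 53.59 (next digit is 6, so round up)
round(8.6397, 3)   # 8.64, i.e. 8.640 (the 9 carries over)

# significant figures
signif(12736, 3)   # 12700
signif(6530, 1)    # 7000
signif(0.576, 2)   # 0.58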
Most of the areas which today lie within modern Greece's borders were at some point in the past part of the Ottoman Empire. This period of Ottoman rule in Greece, lasting from the mid-15th century until the successful Greek War of Independence that broke out in 1821 and the establishment of the modern Greek state in 1832, is known in Greek as Tourkokratia (Greek: Τουρκοκρατία, "Turkish rule"; English: "Turkocracy"). Some regions, however, like the Ionian Islands or Mani in the Peloponnese, were never part of the Ottoman administration, although the latter was under Ottoman suzerainty. The region of northern Greece, dominated by Greek Macedonia and Thrace, was not only incorporated into the Ottoman Empire at a very early stage—between the 1360s and 1430s—but along with Constantinople and northwest Anatolia formed what became the central hub and most important territory of the empire, militarily, culturally, and economically.

The Byzantine Empire, the remnant of the ancient Roman Empire which ruled most of the Greek-speaking world for over 1,100 years, had been fatally weakened since the sacking of Constantinople by the Latin Crusaders in 1204. The Ottoman advance into Greece was preceded by victory over the Serbs to its north. First the Ottomans won the Battle of Maritsa in 1371. The Serb forces were led by King Vukasin Mrnjavcevic, the father of Prince Marko and the co-ruler of the last emperor of the Serbian Nemanjic dynasty. This was followed by another Ottoman victory in the 1389 Battle of Kosovo. With no further threat from the Serbs, and with the Byzantines weakened by civil wars, the Ottomans captured Constantinople in 1453 and advanced southwards into Greece, capturing Athens in 1458. The Greeks held out in the Peloponnese until 1460, and the Venetians and Genoese clung to some of the islands, but by 1500 most of the plains and islands of Greece were in Ottoman hands. The mountains of Greece were largely untouched, and were a refuge for Greeks who wished to flee foreign rule and engage in guerrilla warfare. Cyprus fell in 1571, and the Venetians retained Crete until 1669. The Ionian Islands were only briefly ruled by the Ottomans (Kefalonia from 1479 to 1481 and from 1485 to 1500) and remained primarily under the rule of the Republic of Venice.

Ottoman Greece was a multiethnic society: apart from Greeks and Turks, there were many Jews, Italians (especially Venetians), Armenians, and various Balkan peoples (Serbs, Albanians, Roma (Gypsies), Bulgarians etc.). However, the modern Western notion of multiculturalism, although at first glance it appears to correspond to the system of millets, is considered incompatible with the Ottoman system. On the one hand the Greeks were given some privileges and freedom; on the other they were exposed to a tyranny deriving from the malpractices of the administrative personnel, over which the central government had only remote and incomplete control.

Despite losing their political independence, the Greeks remained dominant in the fields of commerce and business. The consolidation of Ottoman power in the 15th and 16th centuries rendered the Mediterranean safe for Greek shipping, and Greek shipowners became the maritime carriers of the Empire, making tremendous profits. After the Ottoman defeat at the Battle of Lepanto, however, Greek ships often became the target of vicious attacks by Catholic (especially Spanish and Maltese) pirates.
This period of Ottoman rule had a profound impact on Greek society, as new elites emerged. The Greek land-owning aristocracy that had traditionally dominated the Byzantine Empire suffered a tragic fate and was almost completely destroyed. The new leading class in Ottoman Greece were the prokritoi (πρόκριτοι in Greek), called kocabaşis by the Ottomans. The prokritoi were essentially bureaucrats and tax collectors, and gained a negative reputation for corruption and nepotism. On the other hand, the Phanariots became prominent in the imperial capital of Constantinople as businessmen and diplomats, and the Greek Orthodox Church and the Ecumenical Patriarch rose to great power under the Sultan's protection, gaining religious control over the entire Orthodox population of the Empire, Greek and Slavic alike.

The consolidation of Ottoman rule was followed by two distinct trends of Greek migration. The first entailed Greek intellectuals, such as Basilios Bessarion, Georgius Plethon Gemistos and Marcos Mousouros, migrating to other parts of Western Europe and influencing the advent of the Renaissance (though the large-scale migration of Greeks to other parts of Europe, most notably Italian university cities, had begun far earlier, following the Crusader capture of Constantinople). This trend also had an effect on the creation of the modern Greek diaspora. The second entailed Greeks leaving the plains of the Greek peninsula and resettling in the mountains, where the rugged landscape made it hard for the Ottomans to establish either a military or an administrative presence.

The Sultan sat at the apex of the government of the Ottoman Empire. Although he had the trappings of an absolute ruler, he was actually bound by tradition and convention. These restrictions were mainly of a religious nature. Indeed, the Koran was the main restriction on absolute rule by the Sultan and in this way served as a "constitution." Ottoman rule of the provinces was characterized by two main functions: the local administrators within the provinces were to maintain a military establishment and to collect taxes. The military establishment was feudal in character. The Sultan's cavalry was entirely Turkish and was allotted land, in large or small allotments according to the rank of the individual cavalryman. All non-Muslims were forbidden to ride horses, which made travelling more difficult.

The Ottomans divided Greece into six sanjaks, each ruled by a Sanjakbey accountable to the Sultan, who had established his capital in Constantinople in 1453. Before this division occurred, the Ottomans implemented the millet system, which segregated the peoples within the Ottoman Empire on the basis of religion. The conquered land was parceled out to Ottoman nobles, who held it as feudal fiefs (timars and ziamets) directly under the Sultan's authority. This land could not be sold or inherited, but reverted to the Sultan's possession when the fief-holder died. During their lifetimes, these Ottoman nobles, who were generally cavalrymen in the Sultan's army, lived well on the proceeds of their estates, whose land was tilled largely by peasants. The Ottomans essentially superimposed this feudal system on the existing system of peasant tenure: the peasantry remained in possession of their own land, and their tenure over their plots remained hereditary and inalienable. Nor was any military service ever imposed on the peasants by the Ottoman government.
All non-Muslims were in theory forbidden from carrying arms, but this was ignored; indeed, in regions such as Crete, almost every man carried arms. The Greek people were, however, heavily taxed by the Ottoman Empire, and this taxation included a "tribute of children": the Ottomans required that one male child in five within every Christian family be taken away from the family and enrolled in the corps of Janissaries for military training in the Sultan's army. There were many repressive laws, and occasionally the Ottoman government committed massacres against the civilian population. No Greek's word could stand against a Turk's in a law court.

Under the Ottoman system of government, Greek society was at the same time fostered and restricted. With one hand the Turkish regime gave privileges and freedom to its subject people; with the other it imposed a tyranny deriving from the malpractices of its administrative personnel, over which it exercised only remote and incomplete control. In fact the “rayahs” were downtrodden and exposed to the vagaries of the Turkish administration and sometimes of the Greek landlords. The term rayah came to denote an underprivileged, tax-ridden and socially inferior population.

The economic situation of most of Greece deteriorated considerably during the Ottoman era. Life became ruralized and militarized. Heavy burdens of taxation were placed on the Christian population, and many Greeks were reduced to subsistence farming, whereas in earlier eras the region had been heavily developed and urbanized. The exceptions to this rule were Constantinople and the Venetian-held Ionian Islands, where many Greeks lived in prosperity. Greeks deeply resented the declining economic situation of their country during the Ottoman era. After about 1600, the Ottomans resorted to military rule in parts of Greece, which provoked further resistance, and also led to economic dislocation and accelerated population decline. Ottoman landholdings, previously fiefs held directly from the Sultan, became hereditary estates (chifliks), which could be sold or bequeathed to heirs. The new class of Ottoman landlords reduced the hitherto free Greek farmers to serfdom, leading to the depopulation of the plains and to the flight of many people to the mountains in order to escape poverty.

The Sultan regarded the Ecumenical Patriarch of the Greek Orthodox Church as the leader of all Orthodox Christians, Greek or not, within the empire. The Patriarch was accountable to the Sultan for the good behavior of the Orthodox population, and in exchange he was given wide powers over the Orthodox communities, including the non-Greek Slavic peoples. The Patriarch controlled the courts and the schools, as well as the Church, throughout the Greek communities of the empire. This made Orthodox priests, together with the local magnates, called Prokritoi or Dimogerontes, the effective rulers of Greek towns and cities. Some Greek towns, such as Athens and Rhodes, retained municipal self-government, while others were put under Ottoman governors. Several areas, such as the Mani Peninsula in the Peloponnese, and parts of Crete (Sfakia) and Epirus, remained virtually independent. During the frequent Ottoman–Venetian wars, the Greeks sided with the Venetians against the Ottomans, with a few exceptions. The Orthodox Church assisted greatly in the preservation of the Greek heritage, and during the 19th century adherence to the Greek Orthodox faith became increasingly a mark of Greek nationality.
As a rule, the Ottomans did not require the Greeks to become Muslims, although many did so on a superficial level in order to avert the socioeconomic hardships of Ottoman rule or because of the alleged corruption of the Greek clergy. The regions of Greece with the largest concentrations of Ottoman Greek Muslims were Greek Macedonia, notably the Vallaades, neighboring Epirus, and Crete (see Cretan Muslims). Under the millet logic, Greek Muslims, despite often retaining elements of their Greek culture and language, were classified simply as "Muslim", although most Greek Orthodox Christians deemed them to have "turned Turk" and therefore saw them as traitors to their original ethno-religious communities.

Some Greeks became New Martyrs, such as Saint Efraim the Neo-Martyr or Saint Demetrios the Neo-Martyr, while others became Crypto-Christians (Greek Muslims who were secret practitioners of the Greek Orthodox faith) in order to avoid heavy taxes while at the same time expressing their identity by maintaining secret ties to the Greek Orthodox Church. Crypto-Christians officially ran the risk of being killed if they were caught practicing a non-Muslim religion once they had converted to Islam. There were also instances of Greeks from theocratic or Byzantine nobility embracing Islam, such as John Tzelepes Komnenos and Misac Palaeologos Pasha.

Byzantine historians noted the liberal and generous nature of Ottoman Sultans. Bayezid I, according to a Byzantine historian, freely admitted Christians into his society, while Murad II set out to reform abuses that had been prevalent under Greek rulers. Persecutions of Christians did nevertheless take place under the reign of Selim I (1512–1520), known as Selim the Grim, who attempted to stamp out Christianity from the Ottoman Empire. Selim ordered the confiscation of all Christian churches, and while this order was later rescinded, Christians were heavily persecuted during his era.

Taxation and the "tribute of children"

Greeks paid a land tax and a heavy tax on trade, the latter exploiting the wealthy Greeks to fill the state coffers. Greeks, like other Christians, were also made to pay the jizya, the Islamic poll tax which all non-Muslims in the empire were forced to pay instead of the zakat that Muslims must pay as one of the Five Pillars of Islam. Failure to pay the jizya could result in the pledge of protection of a Christian's life and property becoming void, leaving him facing the alternatives of conversion, enslavement or death. As in the rest of the Ottoman Empire, Greeks had to carry a receipt certifying their payment of the jizya at all times or be subject to imprisonment.

Most Greeks did not have to serve in the Sultan's army, but the young boys who were taken away and converted to Islam were made to serve in the Ottoman military. In addition, girls were taken to serve as odalisques in harems. These practices are called the "tribute of children" (devshirmeh; in Greek παιδομάζωμα paidomazoma, meaning "child gathering"), whereby every Christian community was required to give one son in five to be raised as a Muslim and enrolled in the corps of Janissaries, elite units of the Ottoman army. There was much resistance; for example, Greek folklore tells of mothers crippling their sons to avoid their abduction. Nevertheless, entrance into the corps (accompanied by conversion to Islam) offered Greek boys the opportunity to advance as high as governor or even Grand Vizier.
One prominent example is Pargali Ibrahim Pasha, who was born the son of a Greek fisherman from Parga and became one of Sultan Suleiman's most trusted advisors, a field general and a statesman with his own palace. Recruits were in some cases gained through voluntary accession, as some parents were eager to have their children enroll in the Janissary service, which ensured them a successful career and comfort.

Opposition of the Greek populace to taxation or to the paidomazoma could have grave consequences. For example, in 1705 an Ottoman official was sent from Naoussa in Macedonia to seek out and conscript new Janissaries and was killed by Greek rebels who resisted the burden of the devshirmeh. The rebels were subsequently beheaded and their severed heads were displayed in the city of Thessaloniki. In some cases the devshirmeh was greatly feared, as Greek families would often have to relinquish their own sons, who would convert and return later as their oppressors. In other cases, families bribed the officers to ensure that their children got a better life as government officers.

The incorporation of Greece into the Ottoman Empire had other long-term consequences. Economic activity declined to a great extent (mainly because trade flowed towards cities like Thessaloniki, İzmir, and Constantinople), and the population declined, at least in the lowland areas (Ottoman censuses did not include many people in mountainous areas). Turks settled extensively in Thrace and Greek Macedonia, while there were populations of Greek Muslims of Orthodox Christian convert origin, especially in southwestern Macedonia, such as the Vallahades. After their expulsion from Spain in 1492, Sephardic Jews settled in Thessaloniki (known in this period as Salonica or Selanik), which became the main Jewish centre of the empire. The Greeks became more inward-looking, with each region cut off from the others — only Muslims could in theory ride a horse, which made travel more difficult. Greek culture and education declined significantly (with the exception of the Orthodox Church).

Influence on tradition

After the 16th century, many Greek folk songs (dimotika) were produced, inspired by the way of life of the Greek people, the brigands, and the armed conflicts during the centuries of Ottoman rule. Klephtic songs (Greek: Κλέφτικα τραγούδια), or ballads, are a subgenre of Greek folk music thematically oriented around the life of the klephts. Prominent conflicts were immortalised in several folk tales and songs, such as the epic ballad To tragoudi tou Daskalogianni of 1786, about the resistance warfare under Daskalogiannis.

After the unsuccessful Ottoman siege of Vienna in 1683, the Ottoman Empire entered a long decline, both militarily against the Christian powers and internally, leading to an increase in corruption, repression and inefficiency. This provoked discontent, which led to disorders and occasionally rebellions. As more areas drifted out of Ottoman control, the Ottomans resorted to military rule in parts of Greece. This only provoked further resistance. Moreover, it led to economic dislocation, as well as accelerated population decline. Another sign of decline was that Ottoman landholdings, previously fiefs held directly from the Sultan, became hereditary estates (chifliks), which could be sold or bequeathed to heirs. The new class of Ottoman landlords reduced the hitherto free Greek peasants to serfdom, leading to further poverty and depopulation in the plains.
Athens was for the most part a run-down village, its peasant Greek population extremely poor and isolated, and not allowed near the Acropolis, where the wealthier Turks were settled. The French diplomat and philhellene François-René de Chateaubriand, after his visit to Sounion in 1806, wrote of his impressions: "Around me there were graves, silence, disaster, death and some Greek sailors sleeping without cares on the ruins of Greece. I abandoned that divine place forever, my head filled with its greatness in the past and its downfall today". However, the overall Greek population in the plains was reinforced by the return of some Greeks from the mountains during the 17th century.

On the other hand, the position of educated and privileged Greeks within the Ottoman Empire improved greatly in the 17th and 18th centuries. As the empire became more settled, and began to feel its increasing backwardness in relation to the European powers, it increasingly recruited Greeks who had the kind of administrative, technical and financial skills which the Ottomans lacked. From the late 1600s Greeks began to fill some of the highest and most important offices of the Ottoman state. The Phanariotes, a class of wealthy Greeks who lived in the Phanar district of Constantinople, became increasingly powerful. Their travels to Western Europe as merchants or diplomats brought them into contact with advanced ideas of liberalism and nationalism, and it was among the Phanariotes that the modern Greek nationalist movement was born. Many Greek merchants and travellers were influenced by the ideas of the French Revolution, and a new age of Greek Enlightenment began in the 18th century in many Ottoman-ruled Greek cities and towns.

Greek nationalism was also stimulated by agents of Catherine the Great, the Orthodox ruler of the Russian Empire, who hoped to acquire the lands of the declining Ottoman state, including Constantinople itself, by inciting a Christian rebellion against the Ottomans. However, the Greek rising that broke out during the Russo-Ottoman War of 1768 was quickly crushed, disillusioning its Russian patrons. The Treaty of Kuchuk-Kainarji (1774) gave Russia the right to make "representations" to the Sultan in defense of his Orthodox subjects, and the Russians began to interfere regularly in the internal affairs of the Ottoman Empire. This, combined with the new ideas let loose by the French Revolution of 1789, began to reconnect the Greeks with the outside world and led to the development of an active nationalist movement, one of the most progressive of the time.

Greece was peripherally involved in the Napoleonic Wars, but one episode had important consequences. When the French under Napoleon Bonaparte seized Venice in 1797, they also acquired the Ionian Islands, thus ending four hundred years of Venetian rule over the islands. The islands were elevated to the status of a French dependency called the Septinsular Republic, which possessed local autonomy. This was the first time Greeks had governed themselves since the fall of Trebizond in 1461. Among those who held office in the islands was John Capodistria, destined to become independent Greece's first head of state. By the end of the Napoleonic Wars in 1815, Greece had re-emerged from its centuries of isolation. British and French writers and artists began to visit the country, and wealthy Europeans began to collect Greek antiquities. These "philhellenes" were to play an important role in mobilizing support for Greek independence.
Uprisings before 1821

Greeks in various places of the Greek peninsula would at times rise up against Ottoman rule, mainly taking advantage of wars in which the Ottoman Empire was engaged. These uprisings were of mixed scale and impact. During the Ottoman–Venetian War (1463–1479), the Maniot Kladas brothers, Krokodelos and Epifani, led bands of stratioti on behalf of Venice against the Turks in the southern Peloponnese. They put Vardounia and their lands into Venetian possession, for which Epifani then acted as governor.

In 1571, the Christian fleet at the Battle of Lepanto included a dozen ships with Greek captains and crews from Crete and the Ionian Islands, one of them manned with funds from El Greco. The Holy League's success in the battle triggered uprisings in parts of the peninsula such as Phocis (recorded in the Chronicle of Galaxidi) and the Peloponnese, led by the Melissinoi brothers and others. All of these revolts were crushed by the following year. During the Cretan War (1645–1669), the Maniots aided Francesco Morosini and the Venetians in the Peloponnese. Greek irregulars also aided the Venetians during the Morean War in their operations in the Ionian Sea and the Peloponnese. A major uprising during that period was the Orlov Revolt (Greek: Ορλωφικά), which took place during the Russo-Turkish War (1768–1774) and triggered armed unrest in both the Greek mainland and the islands. In 1778, a Greek fleet of seventy vessels was assembled by Lambros Katsonis; it harassed the Turkish squadrons in the Aegean Sea, captured the island of Kastelorizo and engaged the Turkish fleet in naval battles until 1790.

The War of Independence

A secret Greek nationalist organization called the "Friendly Society" or "Company of Friends" (Filiki Eteria) was formed in Odessa in 1814. The members of the organization planned a rebellion with the support of wealthy Greek exile communities in Britain and the United States. They also gained support from sympathizers in Western Europe, as well as covert assistance from Russia. The organization secured Capodistria, who became Russian Foreign Minister after leaving the Ionian Islands, as the leader of the planned revolt. On March 25, 1821 (now Greek Independence Day), the Orthodox Bishop Germanos of Patras proclaimed a national uprising. Simultaneous risings were planned across Greece, including in Macedonia, Crete, and Cyprus. With the initial advantage of surprise, aided by Ottoman inefficiency and the Ottomans' fight against Ali Pasha of Tepelen, the Greeks succeeded in capturing the Peloponnese and some other areas. Some of the first Greek actions were taken against unarmed Ottoman settlements, with about 40% of the Turkish and Albanian Muslim residents of the Peloponnese killed outright and the rest fleeing the area or being deported. The Ottomans recovered and retaliated in turn with savagery, massacring the Greek population of Chios and other towns. This worked to their disadvantage by provoking further sympathy for the Greeks in Britain and France, although the British and French governments suspected that the uprising was a Russian plot to seize Greece and possibly Constantinople from the Ottomans. The Greeks were unable to establish a strong government in the areas they controlled, and characteristically fell to fighting amongst themselves.
Inconclusive fighting between Greeks and Ottomans continued until 1825, when the Sultan sent a powerful fleet and army from Egypt to ravage the Aegean Islands and the Peloponnese. The atrocities that accompanied this expedition, together with sympathy aroused by the death of the poet and leading philhellene Lord Byron at Messolongi in 1824, eventually led the Great Powers to intervene. In October 1827, the British, French and Russian fleets, on the initiative of local commanders but with the tacit approval of their governments, destroyed the Ottoman fleet at the Battle of Navarino. This was the decisive moment in the war of independence.

In October 1828, the French landed troops in the Peloponnese to stop the Ottoman atrocities. Under their protection, the Greeks were able to regroup and form a new government. They then advanced to seize as much territory as possible, including Athens and Thebes, before the Western Powers imposed a ceasefire. A conference in London in March 1829 proposed an independent Greek state with a northern frontier running from Arta to Volos, and including only Euboia and the Cyclades among the islands. The Greeks were disappointed at these restricted frontiers, but were in no position to resist the will of Britain, France and Russia, who had contributed mightily to Greek independence. Greek independence had been formally established by a multi-power treaty in 1830, when the Ottomans finally granted the Greeks their independence, and by the Convention of May 11, 1832, Greece was finally recognized as a sovereign state.

Capodistria, who had been Greece's unrecognized head of state since 1828, was assassinated by the Mavromichalis family in October 1831. To prevent further experiments in republican government, the Great Powers, especially Russia, insisted that Greece be a monarchy, and the Bavarian Prince Otto was chosen to be its first king.

See also: Greek Muslims; Timeline of Orthodoxy in Greece (1453–1821)
The best tips for teaching critical thinking skills include understanding the analytical level of your students and placing an emphasis on writing essays. Since essays involve supporting the writer's thesis, or main idea, with supporting arguments plus research, students can be taught the difference between well-reasoned judgments and mere opinion or belief. Another way to teach critical thinking skills is to encourage students to see both details and the bigger picture by using the "forest and trees" analogy. It also helps to introduce a four-step approach to solving problems: recognizing the problem, exploring all options through creative brainstorming, taking time to reflect on the issues, and finally eliminating solutions or ideas that won't work.

Reasoning based on valid information is a keystone of critical thinking. A good tip to keep in mind when teaching critical thinking skills is to make sure students understand how to choose valid research sources when doing essay assignments. If they don't, making reasonable judgments and logical supporting statements isn't likely to be possible. It can be easy to accept almost any source as reliable unless one is taught to look for valid sources only. Not having accurate sources or facts makes it difficult to eliminate faulty solutions or ideas in problem solving. Students should learn to disregard inaccurate or dubious information that lacks evidence or facts to back it up, and instead use critical reading approaches.

Teaching students to take time to reflect on a problem or issue without making a snap decision, especially one based on emotion, is crucial in communicating the concepts of critical thinking. Unless trained otherwise, many people don't actually spend time thinking and reflecting on the different sides and questions involved in a topic. Rather, they voice their opinion, which is usually fueled by emotions or past personal experience rather than by a thoughtful, wider perspective. Teaching critical thinking skills by emphasizing reflection can often be accomplished by instructing students to think about an issue or problem from many different sides. Such creative thought, brainstorming or open thinking usually leads to questions or connected ideas that in turn may lead to valid points about the subject or situation. Opening the topic up also tends to reveal more options for solving problems connected to it.

Presenting a problem to the class for students to brainstorm and reflect on can help in teaching critical thinking skills. As solutions are mentioned by different students, evaluating the suggestions using a critical approach can further the lesson. If the class seems to focus on either too general or too specific options, bringing up the "forest and trees" analogy may help increase the level of thought. The expression "not being able to see the forest for the trees" can communicate the message that too much attention to detail results in some main points being missed. The opposite scenario, "not being able to see the trees for the forest," presents another common issue when teaching critical thinking skills: seeing only the general trend, without looking at individual cases, can work against thinking critically, as it often leads to stereotyping through over-generalization.
The following diagrams show the Triangle Inequality Theorem and the Angle-Side Relationship Theorem. Examples and solutions follow.

The Triangle Inequality Theorem states that the sum of the lengths of any two sides of a triangle is greater than the length of the third side. The converse of the Triangle Inequality Theorem states that it is not possible to construct a triangle from three line segments if any of them is longer than the sum of the other two.

Example 1: Find the range of values for s for the given triangle.
Step 1: Using the triangle inequality theorem for the above triangle gives us three statements:
s + 4 > 7 ⇒ s > 3
s + 7 > 4 ⇒ s > –3 (not valid because lengths of sides must be positive)
7 + 4 > s ⇒ s < 11
Step 2: Combining the two valid statements: 3 < s < 11
Answer: The length of s is greater than 3 and less than 11.

What is the Triangle Inequality Theorem? The following video states and investigates the triangle inequality theorem. The sum of the lengths of any two sides of a triangle must be greater than the length of the third. We really only need to make sure the sum of the lengths of the two shorter sides is greater than the length of the longest side. Which lengths can form a triangle?

Description of the Triangle Inequality: the following video describes the triangle inequality by trying to construct triangles with different length segments. What are the conditions required to draw a triangle, and how do they illustrate the triangle inequality? It also covers the intuition behind the triangle inequality theorem.

The Angle-Side Relationship states that in a triangle, the side opposite the larger angle is the longer side, and the angle opposite the longer side is the larger angle.

Example 1: Compare the lengths of the sides of the following triangle.
Step 1: We need to find the size of the third angle. The sum of all the angles in any triangle is 180º.
∠A + ∠B + ∠C = 180° ⇒ ∠A + 30° + 65° = 180° ⇒ ∠A = 180° - 95° ⇒ ∠A = 85°
Step 2: Looking at the relative sizes of the angles: ∠B < ∠C < ∠A
Step 3: Following the angle-side relationship we can order the sides accordingly. Remember it is the side opposite the angle: since ∠B < ∠C < ∠A, the sides opposite them satisfy AC < AB < BC.

Examples of the angle-side relationships in triangles: if two sides of a triangle are not congruent, then the larger angle is opposite the larger side. If two angles of a triangle are not congruent, then the larger side is opposite the larger angle.

Exercise: The measure of Angle A is greater than the measure of Angle B, and the measure of Angle B is greater than the measure of Angle C. Find the possible values for the length of side AC.
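As a supplementary illustration (not part of the original lesson), the "two shorter sides" check and the worked example above can be expressed in a few lines of Python:

def can_form_triangle(a, b, c):
    """Return True if side lengths a, b, c satisfy the triangle inequality."""
    sides = sorted([a, b, c])
    # All lengths must be positive, and the two shorter sides together
    # must exceed the longest side.
    return sides[0] > 0 and sides[0] + sides[1] > sides[2]

print(can_form_triangle(3, 4, 5))   # True
print(can_form_triangle(1, 2, 3))   # False (degenerate: 1 + 2 is not > 3)

# Worked example above: sides 7, 4 and s, so s must satisfy 3 < s < 11.
valid = [s for s in range(1, 15) if can_form_triangle(7, 4, s)]
print(valid)                        # [4, 5, 6, 7, 8, 9, 10]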
How to Calculate Atomic Mass
Review the Steps to Calculate Atomic Mass
By Anne Marie Helmenstine, Ph.D. Updated on February 14, 2020

(Image caption: For a single atom, atomic mass is the sum of the protons and neutrons. Electrons are much smaller than protons and neutrons, so their mass isn't factored into the calculation. Science Photo Library/Andrzej Wojcicki/Getty Images)

You may be asked to calculate atomic mass in chemistry or physics. There is more than one way to find atomic mass; which method you use depends on the information you're given. First, it's a good idea to understand what exactly atomic mass means.

What Is Atomic Mass?
Atomic mass is the sum of the masses of the protons, neutrons, and electrons in an atom, or the average mass in a group of atoms. However, electrons have so much less mass than protons and neutrons that they don't factor into the calculation. So, the atomic mass is the sum of the masses of the protons and neutrons. There are three ways to find atomic mass, depending on your situation. Which one to use depends on whether you have a single atom, a natural sample of the element, or simply need to know the standard value.

3 Ways to Find Atomic Mass
The method used to find atomic mass depends on whether you're looking at a single atom, a natural sample, or a sample containing a known ratio of isotopes:

1) Look Up Atomic Mass on the Periodic Table
If it's your first encounter with chemistry, your instructor will want you to learn how to use the periodic table to find the atomic mass (atomic weight) of an element. This number usually is given below an element's symbol. Look for the decimal number, which is a weighted average of the atomic masses of all the natural isotopes of an element.

Example: If you are asked to give the atomic mass of carbon, you first need to know its element symbol, C. Look for C on the periodic table. One number is carbon's element number or atomic number. The atomic number increases as you go across the table. This is not the value you want. The atomic mass or atomic weight is the decimal number. The number of significant figures varies from table to table, but the value is around 12.01. This value on a periodic table is given in atomic mass units or amu, but for chemistry calculations, you usually write atomic mass in terms of grams per mole or g/mol. The atomic mass of carbon would be 12.01 grams per mole of carbon atoms.

2) Sum of Protons and Neutrons for a Single Atom
To calculate the atomic mass of a single atom of an element, add up the mass of the protons and neutrons.

Example: Find the atomic mass of an isotope of carbon that has 7 neutrons. You can see from the periodic table that carbon has an atomic number of 6, which is its number of protons.
The atomic mass of the atom is the mass of the protons plus the mass of the neutrons, 6 + 7, or 13.

3) Weighted Average for All Atoms of an Element
The atomic mass of an element is a weighted average of all the element's isotopes based on their natural abundance. It is simple to calculate the atomic mass of an element with these steps. Typically, in these problems, you are provided with a list of isotopes with their mass and their natural abundance, either as a decimal or a percent value. Multiply each isotope's mass by its abundance. If the abundance is a percent, divide your answer by 100. Add these values together. The answer is the total atomic mass or atomic weight of the element.

Example: You are given a sample containing 98% carbon-12 and 2% carbon-13. What is the relative atomic mass of the element?

First, convert the percentages to decimal values by dividing each percentage by 100. The sample becomes 0.98 carbon-12 and 0.02 carbon-13. (Tip: You can check your math by making certain the decimals add up to 1: 0.98 + 0.02 = 1.00.) Next, multiply the atomic mass of each isotope by the proportion of the element in the sample:
0.98 x 12 = 11.76
0.02 x 13 = 0.26
For the final answer, add these together: 11.76 + 0.26 = 12.02 g/mol

Advanced Note: This atomic mass is slightly higher than the value given in the periodic table for the element carbon. What does this tell you? The sample you were given to analyze contained more carbon-13 than average. You know this because your relative atomic mass is higher than the periodic table value, even though the periodic table number includes heavier isotopes, such as carbon-14. Also, note the numbers given on the periodic table apply to the Earth's crust/atmosphere and may have little bearing on the expected isotope ratio in the mantle or core or on other worlds. Over time, you may notice the atomic mass values listed for each element on the periodic table may change slightly. This happens when scientists revise the estimated isotope ratio in the crust. In modern periodic tables, sometimes a range of values is cited rather than a single atomic mass.
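The same weighted-average steps can be written as a short Python sketch (supplementary; the isotope masses and abundances below are simply the values from the example above):

def average_atomic_mass(isotopes):
    """Weighted average of isotope masses.

    `isotopes` is a list of (mass, abundance) pairs; abundances may be
    given as percentages or decimals, so normalise by their total first.
    """
    total_abundance = sum(abundance for _, abundance in isotopes)
    return sum(mass * abundance for mass, abundance in isotopes) / total_abundance

# Example from the text: 98% carbon-12 and 2% carbon-13.
sample = [(12, 98), (13, 2)]
print(average_atomic_mass(sample))  # 12.02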
- Grade 3 Fractions and Decimals Worksheets (free): Our grade 3 fractions and decimals worksheets provide practice exercises on introductory fraction and decimal concepts, including identifying simple fractions, equivalent fractions, and simple fraction and decimal addition and subtraction. Several worksheets with answer keys are provided for each type of exercise.
- 3rd Grade Fractions Worksheets (free printables): Now things get really interesting, as the third grade math menu features mixed and equivalent fractions, plus fraction conversion, adding and subtracting fractions, and comparing like fractions. Each of these concepts and more are covered in our third grade fraction worksheets, and with colorful images and games like Measuring Cup Madness and Restaurant Math, these third grade fraction worksheets keep students entertained as they learn.
- 3rd Grade Fractions Worksheets (Parenting): This math worksheet will give your child practice identifying fractions of shapes and filling in the missing numbers in fractions. Fractions of shapes skill: learning equal parts, just a part of the whole. Your third grader will shade in parts of rectangles and circles in this coloring math worksheet to match a given fraction amount.
- 3rd Grade Fractions Worksheets, Lessons and Printables: fractions for grades K-2, for example "one half – circle the shape", "one half – color", "one third – circle the shape".
- Third Grade Fractions Worksheets (edHelper): third grade fractions worksheets 81, 82 and 83, plus teacher resources made by other teachers, such as a simple fractions no-prep packet, fraction printables, fractions/decimals/percents, fraction pizza, equivalent fractions on a number line, and a 3rd grade fractions unit with conceptual lessons and practice.
- Grade 3 Fraction Word Problems Worksheets (K5 Learning): These grade 3 worksheets give a selection of word problems dealing with fractions. The material is introductory level and is intended to highlight the use of fractions in real-life situations. Identifying and comparing fractions word problems: these printable worksheets have grade 3 word problems related to identifying and/or comparing fractions.
- Equivalent Fractions (third grade math worksheets): Here is a collection of our printable worksheets for the topic Equivalent Fractions, from the chapter Understand Fractions in the section Fractions and Decimals. A brief description of the worksheets is on each of the worksheet widgets. Click on the images to view, download or print them. All worksheets are free for individual and non-commercial use.
- Grade 3 Math Worksheets – Convert Improper Fractions to Mixed Numbers: Below are six versions of our grade 3 math worksheet on converting improper fractions (where the numerator is greater than the denominator) to mixed numbers. Similar worksheets convert mixed numbers to fractions.
- Fractions (3P Learning): Put these fractions in order from smallest to largest; use the fraction wall at the top of the page to decide which fraction is larger and circle it; label the following fractions. Introducing fractions, comparing and ordering fractions: this fraction wall is just like your fraction strips laid out side by side (1 whole; 1/2, 1/2 – halves; and so on).
- Fractions Worksheets (printable fractions worksheets): These fractions worksheets are a great resource for children in kindergarten, 1st grade, 2nd grade, 3rd grade, 4th grade and 5th grade. Click
here for a detailed description of all the fractions worksheets. If you're looking for a great tool for adding, subtracting, multiplying or dividing mixed fractions, check out this online fraction calculator, a quick link for all fractions worksheets.

Third Grade Worksheet Fractions. The worksheet is an assortment of 4 intriguing pursuits that will enhance your kid's knowledge and abilities. The worksheets are offered in developmentally appropriate versions for kids of different ages. Adding and subtracting integers worksheets come in many ranges, including a number of choices for parentheses use. You can begin with the uppercase cursives and after that move forward with the lowercase cursives. Handwriting for kids will also be rather simple to develop in such a fashion. If you're an adult and wish to improve your handwriting, it can be accomplished. As a result, in the event that you really wish to enhance your kid's handwriting, hurry to explore the advantages of an intelligent learning tool now! Consider how you wish to compose your private faith statement. Sometimes letters have to be adjusted to fit in a particular space. When a letter does not have any verticals, like a capital A or V, the very first diagonal stroke is regarded as the stem. The connected and slanted letters will be quite simple to form once the many shapes are learnt well. Even something as easy as guessing the beginning letter of long words can help your child improve his phonics abilities.

Third Grade Worksheet Fractions. There isn't anything like a superb story, and nothing like being the person who started a renowned urban legend. Deciding upon the ideal approach route: cursive writing is basically joined-up handwriting. Practice reading by yourself as often as possible. Research urban legends to obtain a concept of what's out there prior to making a new one. You are still not sure the radicals have the proper idea. Naturally, you won't use the majority of your ideas. If you've got an idea for a tool, please inform us. That means you can begin right where you are, no matter how little you might feel you've got to give. You are also quite suspicious of any revolutionary shift. In earlier times you've stated that the move to independence may be too early. Each lesson in handwriting should start on a fresh new page, so the little one gets enough room to practice. Every handwriting lesson should begin with the alphabet. Handwriting learning is just one of the most important learning needs of a kid. Learning how to read isn't just challenging, but fun too.

The use of grids: the use of grids is vital in helping your child learn to improve handwriting. Also, bear in mind that your very first try at brainstorming may not bring anything relevant, but don't stop trying. Once you are able to work, you might be surprised how much you get done. Take into consideration how you feel about yourself. Being able to modify the tracking helps fit more letters in a little space or spread out letters if they're too tight. Perhaps you must enlist the aid of another person to encourage or help you keep focused.

Third Grade Worksheet Fractions. Try to remember, you always have to treat your child with great care, compassion and affection to be able to help him learn. You may also ask your kid's teacher for extra worksheets. Your son or daughter will not just learn a different sort of font but will also learn how to write elegantly, because cursive writing is quite beautiful to look at.
As a result, if a kid is already suffering from ADHD, his handwriting will definitely be affected. Accordingly, if children are taught to form the different shapes in a suitable fashion, it will enable them to compose the letters in a really smooth and easy way. Although it can be cute every time a youngster says he "runned" on the playground, students need to understand how to use the past tense so as to speak and write correctly. Let's say you would like to improve your son's or daughter's handwriting: it is obvious that you want to give your child plenty of practice, and, as they say, practice makes perfect. Without phonics skills, it's almost impossible, especially for kids, to learn how to read new words.

Techniques to handle attention issues: it is extremely important that, should you discover your kid is inattentive to his learning, especially when it has to do with reading and writing issues, you begin working on various ways to improve it. Use a student's name in every sentence so there's a single sentence for each kid. Because he or she learns at his or her own rate, there is some variability in the age when a child is ready to learn to read. Teaching your kid to form the alphabet is quite a complicated process.
Why is Six Afraid of Seven? challenges children to think flexibly about adding or subtracting 1, 2, or 3 from the numbers 1 through 10. This game encourages students to count on (as opposed to counting all) by using small addends (1, 2, and 3). Also, without the aid of a physical number line, students become able to navigate up and down quickly. With each turn, students must search their own cards for one that can be played, effectively carrying out several mathematical problems mentally. Strategy is also involved as they must think about which cards they should play to pave the way for future turns, and rid themselves of cards more quickly. To print out cards, card backs, rules and notes, please click here. Common Core Standards *Click on the category title (ex: "Counting and Cardinality") to view the entire standards of the Common Core. Know number names and the count sequence. - K.CC.1. Count to 100 by ones and by tens. - K.CC.2. Count forward beginning from a given number within the known sequence (instead of having to begin at 1). - K.CC.3. Write numbers from 0 to 20. Represent a number of objects with a written numeral 0-20 (with 0 representing a count of no objects). Count to tell the number of objects. - K.CC.4. Understand the relationship between numbers and quantities; connect counting to cardinality. - When counting objects, say the number names in the standard order, pairing each object with one and only one number name and each number name with one and only one object. - Understand that the last number name said tells the number of objects counted. The number of objects is the same regardless of their arrangement or the order in which they were counted. - Understand that each successive number name refers to a quantity that is one larger. - K.CC.5. Count to answer "how many?" questions about as many as 20 things arranged in a line, a rectangular array, or a circle, or as many as 10 things in a scattered configuration; given a number from 1 to 20, count out that many objects. Understand addition as putting together and adding to, and understand subtraction as taking apart and taking from. - K.OA.1. Represent addition and subtraction with objects, fingers, mental images, drawings, sounds (e.g., claps), acting out situations, verbal explanations, expressions, or equations. - K.OA.2. Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem. - K.OA.3. Decompose numbers less than or equal to 10 into pairs in more than one way, e.g., by using objects or drawings, and record each decomposition by a drawing or equation (e.g., 5 = 2 + 3 and 5 = 4 + 1). - K.OA.4. For any number from 1 to 9, find the number that makes 10 when added to the given number, e.g., by using objects or drawings, and record the answer with a drawing or equation. - K.OA.5. Fluently add and subtract within 5. Work with numbers 11-19 to gain foundations for place value. - K.NBT.1. Compose and decompose numbers from 11 to 19 into ten ones and some further ones, e.g., by using objects or drawings, and record each composition or decomposition by a drawing or equation (such as 18 = 10 + 8); understand that these numbers are composed of ten ones and one, two, three, four, five, six, seven, eight, or nine ones. Represent and solve problems involving addition and subtraction. - 1.OA.1. 
Use addition and subtraction within 20 to solve word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem. - 1.OA.2. Solve word problems that call for addition of three whole numbers whose sum is less than or equal to 20, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem. Understand and apply properties of operations and the relationship between addition and subtraction. - 1.OA.3. Apply properties of operations as strategies to add and subtract. Examples: If 8 + 3 = 11 is known, then 3 + 8 = 11 is also known. (Commutative property of addition). To add 2 + 6 + 4, the second two numbers can be added to make a ten, so 2 + 6 + 4 = 2 + 10 = 12. (Associative property of addition.) - 1.OA.4. Understand subtraction as an unknown-addend problem. For example, subtract 10 - 8 by finding the number that makes 10 when added to 8. Add and subtract within 20. - 1.OA.5. Relate counting to addition and subtraction (e.g., by counting on 2 to add 2). - 1.OA.6. Add and subtract within 20, demonstrating fluency for addition and subtraction within 10. Use strategies such as counting on; making ten (e.g., 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14); decomposing a number leading to a ten (e.g., 13 - 4 = 13 - 3 - 1 = 10 - 1 = 9); using the relationship between addition and subtraction (e.g., knowing that 8 + 4 = 12, one knows 12 - 8 = 4); and creating equivalent but easier or known sums (e.g., adding 6 + 7 by creating the known equivalent 6 + 6 + 1 = 12 + 1 = 13). Work with addition and subtraction equations. - 1.OA.7. Understand the meaning of the equal sign, and determine if equations involving addition and subtraction are true or false. For example, which of the following equations are true and which are false? 6 = 6, 7 = 8 - 1, 5 + 2 = 2 + 5, 4 + 1 = 5 + 2. - 1.OA.8. Determine the unknown whole number in an addition or subtraction equation relating three whole numbers. For example, determine the unknown number that makes the equation true in each of the equations 8 + ? = 11, 5 = _ - 3, 6 + 6 = _. Extend the counting sequence. - 1.NBT.1. Count to 120, starting at any number less than 120. In this range, read and write numerals and represent a number of objects with a written numeral. Understand place value. - 1.NBT.2. Understand that the two digits of a two-digit number represent amounts of tens and ones. Understand the following as special cases: - 10 can be thought of as a bundle of ten ones called a ten. - The numbers from 11 to 19 are composed of a ten and one, two, three, four, five, six, seven, eight, or nine ones. Add and subtract within 20. - 2.OA.2. Fluently add and subtract within 20 using mental strategies. By end of Grade 2, know from memory all sums of two one-digit numbers.
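As a small supplementary illustration (not part of the original game write-up or the standards text), the 1.OA.6 "making ten" strategy quoted above, e.g. 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14, can be written out step by step in Python:

def add_by_making_ten(a, b):
    """Add two one-digit numbers the 1.OA.6 way: complete a ten first.

    Assumes a + b is at least 10, e.g. 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14.
    """
    needed = 10 - a        # how much of b is used to complete the ten
    leftover = b - needed  # what is left of b afterwards
    return 10 + leftover

print(add_by_making_ten(8, 6))  # 14
print(add_by_making_ten(9, 5))  # 14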
In the last section we focussed on what networks are and why they are used, and covered a broad range of related topics such as how networks are categorised, topologies, performance factors, internet technologies and more. The second half of this part of the GCSE specification concentrates almost solely on how data is transmitted through a network, regardless of the connection type used.

As you may well be aware, there are multiple ways in which devices may be connected together, for example physically or wirelessly. Furthermore, there are an almost limitless number of different tasks that a device may be carrying out on a network, for example acting as a server or streaming video. Each task is different and places different demands and priorities on the network. For this reason, a range of methods and rules are used to ensure that traffic, regardless of type or purpose, is able to be reliably and securely sent from source to destination.

In this section:
- Wireless Transmission – WiFi
- IP and MAC addresses

Wireless Transmission – WiFi

Wireless transmission of network data really did change the entire way we use computing devices. Without wireless we wouldn't have any of the portability we take for granted today, and you certainly wouldn't be able to update the world on important, life changing events on social media no matter where you were, such as next time you notice it's snowing and decide to tell everyone else in case they hadn't noticed, even though they too have eyes and can experience such crazy phenomena as the weather by just looking out of the window, into the dark, dangerous real world outside…

Wireless network connections are often referred to as WiFi – which stands for "Wireless Fidelity." No, I don't understand why either. WiFi is a set of standards for transmitting data using radio waves to computer devices in the place of a physical cable. It is designed to work using the same networking methods or protocols as a wired connection. To summarise, WiFi is:
- A set of standards, IEEE 802.11 to be precise
- Which dictate how computers should be connected without wires
- It includes methods of encrypting and securing data which is sent
- It also states which frequencies should be used for transmission of data (the 2.4Ghz and 5Ghz bands)

Wireless communication is now so ubiquitous that it can be found in everything from computers, consoles and tablets all the way down to children's toys. Whilst WiFi is undoubtedly very convenient and carries with it a wealth of advantages over a wired connection, it is not perfect and there are some significant disadvantages.

Advantages:
- No need to connect using cables
- Devices can be located anywhere where there is sufficient signal strength – flexibility
- Easy to connect new devices
- Cheaper to install a wireless access point than to run cables to every device that needs to connect

Disadvantages:
- Signals are badly affected by distance
- and solid objects
- and interference from other devices/electronics
- Security – anyone can intercept wireless data/traffic, although it should be encrypted if you are using a secure connection

These disadvantages are too significant to simply state and move on from. Range and interference can be dealt with in a number of ways, but to understand how requires an in-depth conversation about radio waves and how they work. Security is a massive issue, as anyone can intercept wireless network traffic and there is no way to tell that it has happened. WiFi is inherently less secure than Ethernet. Why?
Because you can easily intercept packets of data by just sticking an antenna in the air, whereas with Ethernet you'd need to physically plug yourself in to the network. Consequently, it was necessary for WiFi to be developed alongside some form of encryption that could be applied to connections. Multiple methods of securing connections between wireless devices have been devised over time, such as the initial WEP and now WPA, which stands for WiFi Protected Access. The current standard for encrypting and securing wireless data is called WPA3, although the world is still full of WPA2 devices.
- An implementation of the 802.11i standard (you can't get enough 802.11 – look it up, it's truly fascinating reading)
- Per packet encryption
- 256 bit encryption key

In case you were wondering how secure a 256 bit key is, if you were to try and guess the key by brute force (trying different combinations until you happened to be successful) then you would potentially have to try 115792089237316195423570985008687907853269984665640564039457584007913129639936 different combinations. I don't know about you, but I don't even know how to pronounce a number so large. By the time you and the fastest computer on earth had tried all of those combinations you would almost certainly be dead, the universe would probably have ended too and you would go down in history as one of the least interesting people ever to have lived.

Encryption is covered in more detail in 1.4.2 – Identifying and Preventing Vulnerabilities. It doesn't make sense to repeat the information twice, so for now, here's a very brief summary: encryption is the process of scrambling data so that if it is stolen or intercepted then it will mean nothing without the key to decrypt it. This should keep data safe even if someone manages to gain access to physical hardware or your network.

What is it?
- Encryption scrambles data using a complex, one way algorithm
- This encryption algorithm uses a "key" to encrypt the data
- The data is scrambled and cannot be accessed without another, different key to "unlock" it
- A different key is required to decrypt data

How does it work? (A small code sketch of this process follows the list of wireless standards below.)
- The recipient of a file makes available a "public key"
- The sender encrypts the data to be sent using this public key. This key can be used by anyone to encrypt data – it's public!
- However, this key cannot be used to decrypt the data! It is a one way process
- The scrambled data is sent to the recipient
- The recipient uses their "private key" to decrypt the data so they can view it
- The recipient keeps their private key secret – if it were to be revealed then anyone could unlock/decrypt data sent to them

How does it prevent attacks or help us recover from an attack?
- Encryption makes sending data over a network more secure, as even if a connection is compromised then the data should remain unreadable
- Encryption buys us time to make changes, change passwords or take other steps to secure data before the encryption is broken

Wireless standards are all in the 802.11 standard set and each one defines how connections are to be established, how the sending and receiving of data should work and also how fast that transmission can happen.
- 802.11b – 11mbps (very slow)
- 802.11g – 54mbps
- 802.11n – 600mbps
- 802.11ac – 1300mbps
- 802.11ax (or WiFi 6e as it's known) – up to 9608mbps depending on hardware used (stupidly fast)

Each of these wireless standards is affected by the transmission frequencies that have been allocated to their use.
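To make the public/private-key steps above a little more concrete, here is a deliberately tiny Python sketch in the style of RSA. It is purely illustrative and not secure: real wireless security uses WPA2/WPA3 and proper cryptographic libraries, and the primes below are absurdly small. It simply shows that anyone can scramble data with the public key, while only the private key can unscramble it.

# Toy public/private key demo (RSA-style) - illustrative only, NOT secure.
p, q = 61, 53                     # two small primes; real keys use enormous ones
n = p * q                         # 3233, shared by both keys
e = 17                            # public exponent  -> public key is (e, n)
d = 2753                          # private exponent -> private key is (d, n)

def encrypt(message, public_key):
    exp, mod = public_key
    return pow(message, exp, mod)      # easy to do, practically one-way without d

def decrypt(ciphertext, private_key):
    exp, mod = private_key
    return pow(ciphertext, exp, mod)

m = 42                                 # the "data" being sent
c = encrypt(m, (e, n))                 # anyone can do this with the public key
print(c)                               # scrambled value - meaningless on its own
print(decrypt(c, (d, n)))              # 42 - only the private key recovers it

print(2 ** 256)                        # the number of possible 256-bit keys quoted above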
It may sound strange, but the transmission of data, TV and radio signals is very tightly regulated. You are not allowed to simply pick a frequency of your choice and start sending out transmissions. Regulators have assigned two main frequency bands for wireless network connections; these are 2.4Ghz (gigahertz) and 5Ghz:
- 2.4Ghz is generally a slower, more stable signal and can be sent longer distances
- 5Ghz is generally faster but is more affected by objects, distance etc.

Each band is split into a number of channels to try and separate out traffic on the network, especially in areas where there may be multiple wireless networks (think about how many networks you can pick up around your house from your neighbours). However, those channels do overlap in 2.4Ghz transmissions and, furthermore, everyone with a wireless network is attempting to use the same available frequencies. Therefore, one big problem in wireless networking can be poor performance in congested areas where there is simply too much activity on each channel/band.

Before we go any further, we need to understand a little bit more about radio waves, so we can both understand what these 2.4 and 5Ghz frequencies actually are and talk about some issues we face when using wireless networks and transmission.

In Science lessons you should have come across waves and the terminology used to describe them. In the diagram above, you can see that the wave flows up and down across the X axis. This up and down wave motion is called an "oscillation." The number of times a wave goes up and down (oscillates) during a second is called the frequency of the wave – literally "how many times it oscillates." To complete the picture, the peaks of these wave oscillations (the very top or the very bottom) can be far apart or close together. The distance between two peaks is called the wavelength. Finally, the peak of a wave may be very high and steep, or very low and shallow. The height of the wave from the X axis is called the amplitude of the wave.

The combination of these three features – frequency, wavelength and amplitude – gives the wave its properties. In other words, changing these three things will change the behaviour of a wave and what it does. Without bending your mind with science too much, lots of things in the universe are a form of electromagnetic waves:
- Visible Light
- Radio Waves

The only thing which differentiates this electromagnetic energy is… the wavelength or frequency of the waves. As we now know, WiFi is allocated the 2.4Ghz and 5Ghz frequency bands. In the diagram above you can see this sits somewhere between radio waves and microwaves.
- Electromagnetic waves cover a broad spectrum of frequencies. The frequency of a wave/transmission refers to how quickly the wave oscillates
- This oscillation for WiFi is measured in Gigahertz (Ghz) – 2.4Ghz or 5Ghz
- To receive data your wireless card literally "tunes in" to a frequency and listens to it in almost exactly the same way you tune your radio to a station to receive it
- And just like there are many radio stations on many different frequencies, there are many different frequencies within each band that WiFi can be transmitted on

As with everything in computing, wireless transmission is a trade-off between performance (speed) and other factors such as the distance you can transmit data over. Unlike a cabled connection, waves are affected by their surroundings.
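As a supplementary aside (not part of the original GCSE text), the standard relationship wavelength = speed of light ÷ frequency gives a feel for how physically short these waves are, which is relevant to how the two bands behave around obstacles, discussed next:

SPEED_OF_LIGHT = 3.0e8  # metres per second (approximate)

def wavelength_m(frequency_hz):
    """Wavelength in metres for an electromagnetic wave of the given frequency."""
    return SPEED_OF_LIGHT / frequency_hz

# The two WiFi bands discussed above.
print(round(wavelength_m(2.4e9) * 100, 1))  # ~12.5 cm for 2.4Ghz
print(round(wavelength_m(5.0e9) * 100, 1))  # ~6.0 cm for 5Ghz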
Whilst 2.4 and 5Ghz waves can absolutely pass through apparently solid objects such as walls, they are significantly weakened when they do so. Other factors such as conductive materials (the metal frames buildings are made from, for example) can also significantly affect the strength and quality of a wave. Whilst neither band is perfect, 5Ghz is the future of WiFi as its strengths far outweigh any weaknesses. These advantages and disadvantages are summarised below and are necessary for you to understand in your exam.

Advantages of 2.4Ghz
- Can travel long distances
- Less affected (not immune!) by physical objects

Disadvantages of 2.4Ghz
- Transmission speeds are slower than 5Ghz
- Has only three non-overlapping channels (see below)
- More affected by congestion where there are multiple wireless networks in a small area

Advantages of 5Ghz
- Very high transmission speeds
- Many more channels available to avoid congestion
- Channels can be combined to create higher bandwidth

Disadvantages of 5Ghz
- Much more affected by physical objects
- Does not cover as great a distance as 2.4Ghz

WiFi is fairly clever in that you can have multiple networks all transmitting and receiving on the same frequency. If you did this with radio, it'd sound really weird because you'd be listening to two completely different stations at exactly the same time! Your wireless card or access point is clever enough to be able to filter out some of the "noise" of other networks it is not currently connected to. However, no amount of clever noise cancelling and error correction algorithms are going to handle this noise beyond a certain point. This explains why, if there are a lot of wireless networks in a small space, it can actually have a drastic effect on performance and make things go very slowly – this also goes some way to explain why you can never, ever get 4G on your phone working in a football stadium during a game…

To mitigate this interference, in each band, whether that is 2.4 or 5Ghz, there are multiple WiFi channels that can be used to separate networks out – very much like radio stations are all on slightly different frequencies so they do not interfere with each other.
- The frequencies that WiFi can use are split down into smaller ranges – each one becomes a "channel"
- If several networks are using the same channel then data will be subject to interference and this causes slow downs
- However, most wireless Routers or Access Points are now intelligent enough to switch channels if interference is bad…
- …this dynamically and actively reduces interference and should improve performance
- The downside is that on 2.4Ghz networks the channels overlap, so there are actually only ever 3 channels in any given area that are completely separate. See below:

As you can see in the diagram above, whilst there are 14 channels, they all overlap. There are only three channels which are truly separated from the others in the 2.4Ghz wireless range. 5Ghz solves this problem by having far, far more channels available, which can keep traffic separated. Ignoring the fact the diagram above is in Danish (Kanalnummer = Channel number), you can see there are vastly more channels available for 5Ghz connections than 2.4Ghz.

Bluetooth

Bluetooth is a standard for creating short range wireless connections between devices. It is largely used for peripherals such as remote controls, games console controllers, wireless headphones and so forth.
Bluetooth shares many similarities with WiFi in that data is sent using packets, it uses the 2.4Ghz frequency band to transmit data, and the frequency range it uses is split up into channels so that multiple devices can operate in the same area. The main difference is in the transmission power used. Bluetooth is an extremely low power standard, meaning that signals are transmitted or received over no more than 10 metres or so. This is more than enough for wireless speakers, headphones or remote controls, which must only operate over the distance of the average room. Bluetooth is ideal for mobile, battery powered devices due to its low power consumption and small transmission distances, meaning it does not interfere with other devices in the vicinity. Bluetooth networks tend to be "ad-hoc", meaning they are set up between clients as and when they are needed and are discarded when they are not needed any longer.

IP and MAC addresses

In 1.3.1 – Networks and Topologies, we talked about how data is sent through a network by splitting it up into packets that are then routed (by routers, no less) through the internet. In order to identify routers and servers connected to the internet, each one is allocated an IP address. It seems confusing to then learn that devices connected to a network actually have two addresses that identify them on a network! Why?!

IP addresses are used in routing – the act of finding a path through a network to a destination device. The alternative, MAC addresses, are used by switches to find a specific device on an internal network – you can think of them as the final stage in the journey, ensuring data is delivered to the right place. There are some key differences between the two types of address and we look at each of them in turn below.

To summarise our existing understanding, you should know that:
- Devices are connected to networks using either wires or wireless access points (radio waves)
- Data that is sent is split into packets
- These packets are delivered to devices on internal LANs by switches…
- …or moved closer to their destination in a WAN by routers

Every packet sent through a network will have a destination and source IP address written into its header data. This is so any router knows where the packet should be delivered to and also where it came from. Each device connected to a network is given an IP (Internet Protocol) address. An IP address looks something like this: 192.168.0.1

IP addresses have a very specific structure:
- There are always 4 numbers, separated by dots
- Each number can be anything from 0 to 255
- These numbers are… 8 binary bits! Remember the biggest number we can store in binary with 8 bits is 255
- So, therefore, an IP address is 32 bits long in total
- IP addresses can be anything in the range 0.0.0.0 to 255.255.255.255

The internet is a fluid, dynamic network. Devices are constantly being connected or disconnected from the network, which is one small part of the reason why no one knows what it really looks like and we call it "the cloud." As a consequence, the IP protocol allocates IP addresses on a temporary basis; they are "leased" to devices that ask for them. This means that IP addresses allocated to a device connected to a network can, and do, change. Furthermore, as IP addresses are limited to the range 0.0.0.0 to 255.255.255.255 (and some of these are actually reserved for special purposes), there are not enough IP addresses to simply assign one to each device permanently.
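As a supplementary sketch (using Python's standard ipaddress module, not something required by the GCSE text), you can see the 32-bit structure described above and why the address space runs out:

import ipaddress

addr = ipaddress.IPv4Address("192.168.0.1")

print(int(addr))                 # 3232235521 - the address as a single 32-bit number
print(format(int(addr), "032b")) # the same value written out as 32 binary bits
print(addr.packed)               # the four 8-bit octets: b'\xc0\xa8\x00\x01'

# The whole IPv4 address space: 2 to the power 32 possible values.
print(2 ** 32)                   # 4294967296 - only around 4.3 billion addresses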
There are many clever tricks that have been implemented in networking to get around this lack of IP addresses.
- IP addresses are dynamically assigned. This means they are automatically given out by routers and switches when you connect to them.
- Because they are dynamic, they can change.
- This means you can’t use an IP address to identify an actual piece of hardware – you only know that an IP address is a destination on a network.
- There are approximately 4.2 billion possible IP addresses. This means there are not enough for all the devices that are/could be connected to the internet!

The IP addresses we have been looking at are specified in version 4 of the IP protocol – they are “IP V4” addresses. The problem of running out of IP addresses had been known about long before it actually happened and work began on a new version of IP which would solve this issue – IP V6. Do not ask me what happened to version 5 – perhaps it was late to dinner. IP V6 addresses are longer and consequently we are never, ever going to run out of them, as there are enough to allocate an IP V6 address to pretty much every grain of sand on the planet, many times over. IP V6 addresses are 128 bits in length versus the 32 bits used for IP V4. IP V4 offers 4.2 billion unique addresses, whereas IP V6 has… wait for it… roughly 340 undecillion (3.4 × 10^38) unique addresses. The only issue with IP V6 is that there is a lot of networking hardware out there in the world (understatement of the year award). Network equipment tends to last a long, long time and therefore the transition to new hardware that supports and actually uses IP V6 has been extraordinarily slow and painful, but it is the future.
- When you connect to a network, a switch or router gives your device an IP address
- This IP address is not a unique identifier for your device and can change – they are dynamic
- The IP address is used to route packets to a destination device connected to a network
- Packets contain a source and destination IP address in their headers
- IP addresses are 32 bit, written in decimal, separated by dots – e.g. 192.168.0.1 (which will probably take you to your router login page if you type it into a browser…)
- We can and will run out of IP V4 addresses
- The solution is IP V6.

The eagle-eyed amongst you, when reading about IP addresses, should’ve asked the question “if IP addresses don’t uniquely identify a device, what does?” More to the point, you should also have asked “…and how do we know one device from another?” Or “If I have the latest iPhone and my friend has the exact same one, how does a network know which is which?!” And you would have a point. The answer, of course, is MAC addresses.

To define the term: MAC address means Media Access Control address. We’re still none the wiser, are we? A MAC address is simply a unique identifier given to any device which is capable of connecting to a network. Therefore, anything that has wireless or wired capability has a MAC address. A MAC address looks something like this: 00:1A:2B:3C:4D:5E

This looks significantly different to an IP address and that’s because… it is. But not as much as you might think. By convention, instead of writing down MAC addresses in decimal/denary as we do with IP addresses, each 8-bit section is written as a two-digit hexadecimal number.
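Here is a tiny Python sketch of that convention. The six octet values below are made up purely for demonstration; the point is that each 8-bit value becomes exactly two hexadecimal digits.

octets = [0, 26, 43, 60, 77, 94]                      # six values, each 0-255 (invented for the example)
mac = ":".join(f"{value:02X}" for value in octets)    # two hex digits per octet, separated by colons
print("MAC address:", mac)                            # -> 00:1A:2B:3C:4D:5E
print("possible MAC addresses:", 2 ** 48)             # 281,474,976,710,656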
Hexadecimal is probably used because of the length of a MAC address compared to an IP address – we are less likely to make mistakes when reading out a MAC address written in hexadecimal.

MAC addresses consist of:
- 6 “octets” separated by colons “:”
- Each octet may be any number from 0-255 (like an IP address)
- Each octet is written in hexadecimal (which explains all the letters). If you’re not sure what that is, look at number systems here.

Some facts about MAC addresses that you might find really useful in the exam:
- They are totally unique and never change. If you throw away a device, the MAC address dies with it.
- They are used to uniquely identify each device on a network
- There are 2^48 possible MAC addresses (281,474,976,710,656 possible MAC addresses) so we are not going to run out of them in the near future.
- A MAC address is given to a device when it is manufactured – unlike IP addresses, which can be assigned every time you join a network.

When a device is connected to a network, it is assigned an IP address so that data can be routed to it through the internet. However, once the data arrives on the local network, the switch uses the destination device’s MAC address to ensure that packets are delivered to their final destination. This explains why your home internet can be shared so easily:
- You have only one internet connection at home
- Which means you are only given one IP address for your house
- Your switch uses MAC addresses to ensure that data is directed to the right device when it arrives

We could define what standards are by simply saying “Standards are a set, fixed method or way of doing something.” That, whilst completely true, doesn’t help us to understand why they exist nor why they are so important in our everyday lives. You will be well aware that there is always more than one way of doing something. If you’ve ever been abroad, then you’ll know that even something as commonplace as sending electricity to an appliance is totally different in other countries than it is in the UK. Having different methods or standards for doing the same thing is fine in some respects – each country sticks to one type of plug within their borders. However, it starts to fall apart and cause problems on a global scale.

The ultimate goal of having standards or standardising the way we do things is to end up with one, single method of doing something. This has the distinct advantage that, no matter where you are in the world, which language you speak, which manufacturer made your product – it will just work. Doing things in a standard way has many distinct advantages:
- It reduces costs – you do not need to develop your own solution to a problem
- It creates compatibility – your products work with those from other manufacturers
- It is convenient – people know how things work and what to expect
- It usually results in cheaper products for consumers, leading to wider adoption

Some companies don’t like standards. It costs many millions to create a new product and, understandably, a business will want to maximise its investment. If you use standards then anyone can create a competing product with the same features. If you use proprietary technology (meaning you do not use a freely available standard or you refuse to share your methods) then you can exclusively sell a product. Sometimes this approach works, but more often it does not. Computing and technology history is littered with examples of great proprietary products that are now no longer relevant.
One famous example is Sony who created the first decent video recorder. They called their technology Betamax and these were fantastic products. Betamax videos were small, compact and created very high quality recordings of your favourite TV show. However, they were very expensive, around £500 in the early 1980’s which is roughly somewhere around £3 trillion in todays money. Sony refused to share or sell their Betamax technology to anyone else and so competitors came up with their own version called VHS, short for Video Home System. VHS tapes were larger than Betamax and produced inferior quality recordings, however, because VHS was an open standard meaning anyone could make a VHS recorder or VHS tapes, they quickly became the most popular choice and Betamax was dead in the water despite its superiority. What has this got to do with computers and networking? Everything. At the same time as Betamax, in the 1980’s, home computers were all the rage. They were available from a variety of once famous brands such as Commodore, Amstrad, Sinclair, Acorn and so on. Each one worked in a completely different way despite all using very similar CPU, memory and other technology. These computers could not easily talk to each other and were completely incompatible. In 1980, IBM decided to enter the market for “mini” computers – machines that could fit on your desk rather than take up an entire room to themselves. IBM made the decision that in order to cut costs, they would use only off the shelf, widely available components in their new IBM PC as it was to be called. This meant they were making a machine that used standard, well understood components. The magic that tied this system together was something called the “BIOS.” This is a clever piece of software which kicks in when the power is turned on and basically makes the computer work to the point where it can load software and be used. IBM kept this secret to themselves and were soon selling hundreds of thousands of computers. Other companies quickly worked out how the IBM BIOS worked and made their own versions and copies, all of a sudden, the computer world was being standardised almost completely by accident. IBM “compatible” computers flooded the market and soon any business machine was either an IBM PC or a PC compatible – the Personal Computer had become the de facto standard in computing. The advent of the IBM PC and its clones marks the point in history where standards became more important than who manufactured a device. By standardising computing, the world had taken a massive leap for the following reasons: - Software could be written for one single platform, meaning it could be written once and run on many different machines, made by many different manufacturers - Computers could communicate as they all used the same or similar components – this made networking simple - IBM PC’s were modular, meaning you could plug in expansion cards which added new features, making it possible to customise a machine for a particular purpose. Ultimately, we arrive at the present day. A time where, because of standards being adopted across the world, we are able to cheaply build devices of all types, sizes and purposes which can communicate over the internet. Without standards, there would be no internet, no global communication or information sharing. All of this, just because we agreed to a set of rules for how computers should talk to each other. Protocols – Implementing Standards Computing today is dominated by standards. 
In this section we are now going to focus solely on standards associated with communicating on a network. These standards for network communications are called “protocols.” Definition: A protocol is a set of rules which establish how communication between two devices should happen. Protocols define absolutely every last tiny detail of what can, could and might happen during a communication. Everything from how to initiate a connection, how to confirm you have received data, what to do if there is a mistake or error. Protocols leave nothing to the imagination, nothing to chance and there is no allowance for interpretation. In other words, the rules are the rules and that’s it! Because protocol descriptions and rules can very quickly become large and complex in nature, something broad like “connecting to the internet” or “requesting and displaying a web page” will be broken down into manageable chunks called “layers” – we are going to discuss these in more detail in the next section. Layers are usually descriptions of tasks that have to be carried out, such as “link” a connection or to create a “session” between a client and server. However, even layers are too vague and broad and so each layer will be broken down even further into very clear, distinct tasks that must be carried out and these are described by individual protocols. An individual protocol might define the rules for: - How devices establish a communication link - How data should be sent and received (including how it is broken up and re-assembled) - How data is routed around a network - How devices identify themselves and connect to a network - How errors and data corruption should be handled A protocol may be implemented (carried out) by either hardware, for example a network card, or in software, for example a web browser or operating system. Reducing Complexity – Layers and the TCP/IP Model Networking, as we so sweepingly call it, is actually extremely complex to manage at a hardware and software level. The internet as we know it today is the result of multiple technologies having developed and evolved over time and eventually merged together as the accepted standard way of communicating. Sending data across a network is not a case of “here you go, send that down the wire.” If we break down the problem of connecting computers together in order to send and receive data, we may possibly describe the main issues facing us as follows: - Software – which programs want to use networking and communication features? - Network Management – how do we establish connections, split up and send data, manage connections between devices? - Physical Networking – how do we actually connect devices at a physical level? How do we transmit electrical signals that represent data? How do we speed up transmission and avoid data loss? There are, clearly, a lot of questions to be answered! As such, those involved in networking quickly came up with a conceptual model called the “TCP/IP model” and also the “OSI Model” to try and describe what will happen at each significant stage of creating a network connection, sending data and receiving data. In the exam, we are expected to have a broad overview of the TCP/IP model. To put it in simple terms, TCP/IP is the model which describes how internet connections are established and used. The “TCP/IP stack” as it is called, describes how data is presented, prepared and transmitted through a network (the internet). When you send data, it travels down through the layers and is then sent. 
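Before following the data back up the stack on the receiving side, here is a very rough Python sketch of that “travelling down through the layers” idea. The layer names and header contents are simplified inventions for teaching, not a real protocol implementation: each layer simply wraps the data it receives from the layer above in its own header.

# Each function plays the role of one layer, wrapping the data from the layer above.
def application_layer(message: str) -> str:
    return "HTTP-REQUEST|" + message

def transport_layer(segment: str) -> str:
    return "TCP[src_port=50000,dst_port=80]|" + segment

def internet_layer(packet: str) -> str:
    return "IP[src=192.168.0.2,dst=203.0.113.10]|" + packet

def link_layer(frame: str) -> str:
    return "ETH[dst_mac=00:1A:2B:3C:4D:5E]|" + frame

data = "GET /index.html"
for layer in (application_layer, transport_layer, internet_layer, link_layer):
    data = layer(data)
    print(data)          # watch the headers pile up as the data moves down the stack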
Coming back up the other side: when data is received, it travels through the stack from bottom to top to be reconstructed and displayed. TCP/IP is split into four main layers: the application, transport, internet and link (network access) layers. TCP/IP at this “layer level” shows how a network connection should be organised and gives an idea of what happens at each stage. Each layer of the model contains many protocols – the things that rigidly define how a particular part of the stack will work. Breaking up a significant challenge such as “making the internet work” into a model like this has some significant advantages:
- Developers of both hardware and software may focus on one particular layer, without having to worry about the others. This saves time and money.
- Each layer can be robustly defined in terms of what it does and how it functions. This is the whole purpose of standards! Everyone knows what should happen at each layer or in each protocol.
- Scalability – new protocols may be added in each layer to add new functionality and features.

In the exam, you are not expected to draw the TCP/IP stack nor to name its layers. However, you do need to know what a layer is and why conceptual models break up networking tasks into layers. Definition of layers:
- Layers divide the design of a set of related protocols into small, well defined pieces.
- Each layer above a lower layer adds to the services provided by the lower layers
- Each layer is independent of other layers and clearly defines the services it provides. No layer needs to know HOW another layer works, only what to pass to it or expect from it.

The big advantage of layers is that they allow us to make changes in one layer without affecting other layers!

Ethernet is arguably the standard protocol on which all other networking protocols rely. Remember, a protocol is a set of defined rules and methods for sending and receiving data on a network, and if you adhere to these standards, your device will be able to communicate with any other device which also adheres to these standards. Ethernet is IEEE standard 802.3. Now you know that, you can wow your friends, parents or pet trifle with your new-found knowledge. But, what does this actually mean?
- A set of standards
- Which dictate how computers should be physically connected…
- …and exactly how data will travel between those devices.
- More secure than wireless because it uses physical connections.

Ethernet is found at the very bottom layer of the TCP/IP model and for some reason, OCR have picked it out as a protocol you should be especially aware of. They are also extremely picky about the use of the word Ethernet, so let us establish one important point right now: It will be sufficient for your exam for you to summarise Ethernet in the following way: Ethernet identifies network devices by their MAC addresses. It is responsible for establishing the physical connection (at an electrical signal level) between devices on a network. The Ethernet standard defines and specifies the type of cables that must be used and how data is to be transmitted down those cables in the form of “frames” of data. A frame, at GCSE level, is simply a collection of data encapsulating a packet from higher layers which is to be sent across a connection.

Ethernet has several advantages:
- It is the standard for network communication and as such virtually all devices that are networked use these standards. Manufacturers will always adhere to the Ethernet standard when making a device with wired connectivity.
- Compatibility – Any device with an RJ45 network socket will support the Ethernet standard
- Speed – it’s silly fast – speeds of up to 100 gigabits per second can be achieved
- Security – you have to physically connect to the network to intercept packets
- Reliability – the use of packets and error correction methods means data can be sent quickly and reliably on the network

On the downside:
- Ethernet can be affected by interference on or around cables which may disrupt data transmission
- Distance – Ethernet cables have a maximum sensible length of 100m before the signal degrades too much to be useful.

Other protocols you need to know about

The topic of protocols and layers is an absolute favourite on the OCR GCSE; however, due to the complexity of protocols, you are only required to understand the briefest of overviews for each one. Below are all the protocols which are on the GCSE specification, along with a very short explanation of what each stands for, is responsible for and how it may be used. You shouldn’t need to know any more than this…

What does it stand for? Transmission Control Protocol/Internet Protocol What is it? A “suite of protocols” which define how internet connections are created and managed. TCP/IP refers to the two main protocols: TCP, which is responsible for establishing connections between devices and ensuring reliable delivery of data, and IP, which is responsible for splitting data into packets and routing them through the internet to their destination. TCP/IP actually contains many more protocols, organised into four conceptual layers, which provide all of the functionality we expect from an internet connected device, such as being able to transmit and receive web pages, emails or even synchronise the time between devices. Anything else I need to know? Everything in the section above which explains TCP/IP and layers in more detail!

What does it stand for? Hyper Text Transfer Protocol What is it? The protocol responsible for establishing a connection between your web browser and a web server and transmitting website data from server to client. Web browsers send HTTP requests for a website to a web server; when these are received, a response along with the web page itself is returned from the server to the client. HTTP responses include a numerical status code, classified according to the first digit. You may well have seen some of these when a website fails to work, such as “404 – Not Found” when a page has been removed or “500 – Internal Server Error” when something has gone wrong on the server. Anything else I need to know? HTTP is a plain text protocol, meaning all data sent between client and server is not encrypted in any way. If it is intercepted, then it can be immediately understood. HTTP uses URLs (Uniform Resource Locators – website addresses) to locate web servers. These URLs must be translated into an IP address through the use of another protocol – DNS!

What does it stand for? Hyper Text Transfer Protocol – Secure What is it? HTTPS is an extension to the HTTP protocol, meaning it is identical in almost every way except one – data is sent in a secure manner through the use of encryption. HTTPS establishes a secure connection to a web server using yet another protocol called “TLS”, which stands for Transport Layer Security. TLS encrypts traffic sent between client and server, which prevents a third party from understanding any data they may intercept. Anything else I need to know? Without HTTPS, it would not be possible to use the WWW securely.
Everyday activities such as shopping, using online banking or logging in to any website for any reason rely on HTTPS to not only protect the user names and passwords sent between client and server, but also the sensitive data that may then be subsequently transmitted.

What does it stand for? File Transfer Protocol What is it? A protocol designed purely for the sending and receiving of files between a client and a server. OCR are very clear that they want you to say it is a client-server model in your answers! FTP actually creates two connections between a client and server, one for control signals (requests, signing in and out, etc.) and one for the actual data transfer. All data, log-in information and control commands are sent in plain text. Anything else I need to know? FTP has been phased out of use during recent years and most modern web browsers no longer support it, due to multiple security concerns and better methods of file transfer emerging. Alternatives such as Secure FTP (FTPS) and SSH FTP (SFTP) are now used.

What does it stand for? Post Office Protocol What is it? POP is used exclusively to connect to email servers and to retrieve email. To be clear, POP only gets email from a server and then deletes the server copy; it does almost nothing else. POP is an extremely simple protocol to implement and is still in use today despite being one of the earlier protocols to be created. POP stores email locally, on the client computer. Anything else I need to know? POP cannot be used to send emails. POP has been developed to allow encrypted connections to email servers.

What does it stand for? Internet Message Access Protocol What is it? IMAP is another protocol used for the retrieval of email from a server; however, it is more fully featured than POP. Whilst POP will connect, retrieve mail and then disconnect from a server, IMAP can maintain a connection to the email server, meaning if new email arrives it can automatically be fetched. IMAP contains features which allow the use of folders in an email inbox, allow multiple users to access the same mailbox on a server, and provide support for monitoring the state of mail messages, such as read, unread, high priority and so forth. Anything else I need to know? IMAP is not responsible for the sending of email messages!

What does it stand for? Simple Mail Transfer Protocol What is it? SMTP manages the sending of email. It is called an “outgoing” protocol to reflect the fact that it is responsible for taking email messages from a client and sending them to an email server to be delivered or forwarded on to another mail server. SMTP, therefore, is the protocol which governs mail being sent from client to server, or from mail server to mail server. Anything else I need to know? SMTP supports secure connections and encryption as standard.
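To tie a few of these ideas together, here is a small sketch using Python’s standard http.client module to perform the kind of HTTPS request described above and print the status code that comes back. It assumes the machine running it has internet access and that example.com is reachable; it is only an illustration, not something the exam requires.

import http.client

connection = http.client.HTTPSConnection("example.com", timeout=10)
connection.request("GET", "/")             # an HTTP GET request sent over TLS
response = connection.getresponse()        # the server's response
print(response.status, response.reason)    # e.g. 200 OK, or 404 Not Found if the page is missing
body = response.read()
print(len(body), "bytes of HTML received")
connection.close()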
Inclusive education is defined as an attempt to overcome various forms of exclusion. It aims to educate students with special needs as well as gifted students; combat poverty and economic marginalization and segregation based on gender; give special attention to cultural plurality and diversity as an educational right that needs to be affirmed; protect the rights of minorities and migrants. Thus, inclusive education attempts to integrate all citizens in society, while upholding values of social justice. According to the report prepared for the World Education conference in 2008, inclusive education concept means that “all children should be subject to similar learning-teaching methods regardless of their social and cultural background and the different abilities and skills they possess. Education opportunities should be provided for all even for those with special needs of whom the ones with certain potentials should be integrated with the normal students” (p. 49). Special education needs Bahrain has always paid great attention to “all vulnerable groups within the community and especially the people with disability and special needs” as these groups are an integrated part of the community and its overall development. According to the Special Education Directorate in the Ministry of Education, special education is defined as “the programs and services provided for children who differ from their peers, whether physically, mentally or emotionally, to the point that they need special expertise, approaches or educational materials that would help them achieve the best possible educational outcomes, whether in regular classes or special classes if their problems are more severe.” These include specialized programs for students with intellectual disabilities, Down syndrome, autism, physical disability, visual impairment and hearing impairment, as well as outstanding and gifted students. The National Strategy (2012) calls for more integration. In this regard, the Ministry of Education has ensured that, based on assessment of their cases, students with special needs could be integrated in regular classrooms and many efforts have been made in this respect. The MOE has then started to integrate all children with disabilities and special needs in public schools as well as private schools, based on their parents’ choice. Special emphasis has been put on the preparation of the environment in terms of awareness, facilities and special training in dealing and interacting with people with disabilities. The aim is to enrol students with special needs in mainstream schools without discrimination and integrate them with other students. In the academic year 2019-2020, there were 179 schools that have implemented the special education programme at different education levels as set by the Ministry. Many non-state educational institutions, profit and non-profit, deliver educations for special needs students in separate special schools. The Kingdom of Bahrain acceded to the International Convention on the Elimination of All Forms of Racial Discrimination of 1965, but has not ratified the Convention against Discrimination in Education (1960) though it has reported to UNESCO in multiple consultations . In parallel, Article 7 of the Bahraini Constitution (2002) states that a) the State […] guarantees educational and cultural services to its citizens. Education is compulsory and free in the early stages as specified and provided by law. The necessary plan to combat illiteracy is laid down by law. 
Article 6 of the 2005 Education Law states that “Basic education is a right of those children who reach the age of six years at the beginning of the academic year. The Kingdom is obliged to provide education for them and their parents or legal guardians are obliged to facilitate this. This shall be for a period of at least nine years of schooling. The Ministry of Education in the Kingdom will issue the necessary decrees to regulate and enforce the compulsory nature of education with regard to parents and legal guardians.” Article 7 of the Act states: “Basic and secondary education shall be free in schools within the Kingdom”. Bahrain joined the Arab Agreement for Employing and Rehabilitating Persons with Disabilities of 1993 in 1999 and the Arab Decade for Disabled People championed by League of Arab States. It ratified the UN Convention on the Rights of Persons with Disabilities (CRPD) (2006) with Law 22/2011. Article 5 of the Education Law 27 (2005) outlines the rights of people with disabilities to be integrated, but Article 2 of this Law refers only to Bahraini citizens. However, according to the policies adopted by the Ministry of Education, non-Bahraini children who reside in Bahrain also have the right to join all the special education programs and benefit from all special education services offered either in public or private schools, based on their parents’ choice. Bahrain has developed the “National Strategy for Persons with Disabilities ”, endorsed in 2013, to provide guidance to government sectors, NGOs, the private sector, professional groups, educators, advocates, and society at large, on the tasks required to ensure that the rights of persons with disabilities are effectively observed and realized. It adopts a human rights and development approach to disability which focuses on the removal of barriers to equal participation and the elimination of discrimination. This National Strategy also highlights that there is a need for clearer and more direct legislation to guarantee the rights of people with special needs. Also, mechanisms still must be devised to help implement the CRPD. Finally, national laws need to be issued to echo and reflect international conventions; and monitoring and evaluation mechanism’s need to be provided. In addition, the Strategic Partnership Framework 2018-2022 aims to “strengthen national capacity to support children living with disabilities through a multi-sector approach that considers services and support provided at the national, community and family level” (p. 18) . Bahrain ratified Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) in 2002. Additionally, in line with UNESCO’s international decade of literacy (2003-2012), Bahrain established nurseries where female adult learners can leave their children while in class and offered appropriate transportation especially for female adult learners. That said, there does not seem to be any specific laws or policies for the inclusion of gender minorities in the education system. The Kingdom of Bahrain also establishes the principle of gender equality in higher education, both in terms to access and scholarships (p. 11). Ethnic and linguistic groups Article 7 of the Constitution (2002) states that “The law regulates care for religious and national instruction in the various stages and forms of education and at all stages is concerned to develop the citizen’s personality and his pride in his Arabism”. 
Bahrain adopted the UN Declaration on the Rights of Indigenous Peoples (2007). Refugees and students in conflict areas Bahrain has not signed the 1951 Convention relating to the Status of Refugees and the 1967 Protocol. Moreover, Bahrain is neither a party to the 1954 Convention relating to the Status of Stateless Persons (the 1954 Convention), nor to the 1961 Convention on the Reduction of Statelessness (the 1961 Convention). Bahrain does not have domestic legislation on refugees. A multi-year project led by UNESCO and funded by the Sultan Bin Abdel Aziz Al-Saud Foundation aimed to assess the situation regarding inclusion of vulnerable groups, including students in conflict areas, and to help countries exchange experiences. In 2014, Bahrain amended its 1963 Citizenship Act, allowing mothers to confer their nationality on their children born either in their home countries or abroad if the fathers are either Bahraini nationals or unknown or stateless. Despite the adoption of some case-by-case measures, children of Bahraini mothers and non-Bahraini fathers do not automatically obtain Bahraini nationality, which exposes them, for example, to education and residency fees. According to Article 1 of Law No. (35) of 2009 Concerning the Treatment of Non-Bahraini Wives and Children of Bahraini Women Married to Non-Bahrainis in the Same Way Bahrainis are Treated Regarding Some Fees Prescribed for Some Government Services, “the wife of a non-Bahraini and the children of a Bahraini woman married to a non-Bahraini shall be treated as a Bahraini citizen in all matters pertaining to the fees prescribed for public health and education services as well as residence fees, provided that they are permanently residing in the Kingdom of Bahrain.” Additionally, education is free for all children in Bahrain, be they Bahraini or non-Bahraini, from grades 1 to 12. As well as that, according to Article (3) of Decree Nº (82) of 2017 Concerning the Amendment of Article (3) of Decree Nº (24) of 2008 Concerning the Eligibility Criteria for the Disability Allowance in the Kingdom of Bahrain, sons and daughters of Bahraini mothers who are married to foreigners who permanently reside in the Kingdom of Bahrain are entitled to disability allowances from the government of Bahrain if they are disabled and their disability is confirmed by the disability evaluation committee. People with disability and special needs are supported by different ministries, including the Ministry of Labour & Social Development, Ministry of Education, Ministry of Interior, Information & eGovernment Authority, Ministry of Health, Ministry of Information Affairs, Ministry of Works, Electricity and Water Authority, Ministry of Industry, and many related NGOs. Based on Directive number 29 (2006), a new department dedicated to special education was established under the Ministry of Education. In addition, Articles 17 and 18 of Law 74 (2006) stipulate the establishment of the High Committee for Disabled People’s Affairs, headed by the Minister of Social Affairs. This Committee has the mandate to plan and coordinate all efforts related to the care for individuals with special needs.
It aims to establish the frameworks for assessing needs and the conditions and requirements for admission in rehabilitation centres; propose laws and policies; put forward legislative proposals that help overcome all forms of explicit and implicit discrimination against people with special needs; and advocate for the rights of people with special needs as well as the effective implementation of the UN Convention on the Rights of Persons with Disabilities. In parallel, in 2010, the Ministerial Council of Bahrain created a committee, which would include members from the mentioned High Committee for Disabled People’s Affairs and the Ministry of Social Affairs, to assess disabilities, with the main objectives of creating national standards for assessing disabilities and of coordinating with the Ministry of Education to integrate children with disabilities in public schools. Finally, Ministerial Decree Number 50 (2010) aims to coordinate efforts of assessing and evaluating cases of disabilities. It also aims to better coordinate efforts between the Ministries of Health and Education, the High Committee for Disabled People’s Affairs, and various universities in Bahrain. Since 2008, the Ministry of Education has been equipping various schools and classrooms in ways that make them highly accessible to students with various physical disabilities (p. 54). Similarly, several buildings were made accessible across Bahrain, including health care centres and schools. Yet the National Strategy for the Rights of Individuals with Disabilities (2012-2016) underlines the need to have clearer laws and policies that introduce national standards related to accessibility to services, buildings, and transportation. The National Strategy for the Rights of Individuals with Disabilities (2012-2016) mentions that Bahraini legislation does not emphasize some key aspects of special needs accommodation, including: communication, which would include Braille and other means; language; reasonable accommodation measures; and public design, which would entail ensuring that public spaces accommodate people with special needs, among others (Ministry of Social Affairs, 2012, p. 34). As documented on the database of the UNESCO Institute for Statistics, the Kingdom of Bahrain has succeeded in ensuring that all special needs students have access to basic educational services. The proportion of primary, lower secondary and upper secondary schools with adapted infrastructure and materials for students with disabilities has reached 100% in all educational stages. In 2015, the Minister of Education highlighted that schools integrating students with special needs were equipped with all aids and facilities required for that category of students, such as ramps and passageways for physically handicapped students, signs in Braille, spectacles, special books and computers for those with visual disabilities, medical hearing aids and wheelchairs. In addition, special education teachers and the female workers who accompany students suffering from autism were provided with 11 buses equipped with mechanical lifts and all the features that make them suitable for students with special needs. Finally, a full curriculum was designed and implemented starting academic year 2015/2016 for students with basic mental disabilities and Down syndrome. Some subject matters, such as Chemistry, Physics and Biology, were replaced by other subject matters, such as Arabic and Social Studies, to accommodate the special needs and abilities of students with visual impairment.
The Bahraini government financially supports special education teacher training at the Arabian Gulf University. In addition, 540 specialized personnel are employed to work in the government-run schools. They specialize in mental disability and autism and have all received Bachelor degrees in Psychology (with special needs focus), advanced diploma and master degrees in special education. As for those working with students with other types of disabilities, intensive workshops and meetings with specialists are organized to help prepare and train them. These trainings seem to be available for both pre-service and in-service teachers. The Education and Training Quality Authority of the Kingdom of Bahrain issues an annual report. In addition, the National Strategy for the Rights of Individuals with Disabilities (2012-2016) aims to implement a monitoring system based on nationally developed and approved indicators. The Ministry of Social Affairs also conducts periodical survey studies on cases with disabilities through its social aid and social research units. Moreover, all departments and agencies dealing with or offering services of any kind to people with disabilities are expected to submit semi-annual and annual reports to unit of social rehabilitation under this Ministry. Finally, Decree 62 (2007) established a committee focused on monitoring activities and implementation of national strategy under the High Committee for Disabled Affairs.
Basic math glossary-O

Basic math glossary-O defines words beginning with the letter O.

Oblique angle: An angle that is not a right angle, which means the angle can be acute or obtuse.

Obtuse angle: An angle whose measure is bigger than 90 degrees.

Obtuse triangle: A triangle with one obtuse angle.

Octagon: A polygon with 8 sides and 8 angles.

A number that is less than 0.

Odd number: A number that cannot be divided evenly by 2.

Open statement: Also known as an open sentence, it is an equation that is neither true nor false. For example, n + 6 = 9 is an open statement because we do not know n. Therefore, we have no clue if it is true or false.

Order: Sequence from smallest to largest.

Ordered pair: The x and y values that give the location of a point in a coordinate system.

Origin: The point where the x-axis and the y-axis intersect, with coordinates (0,0).

Ounce: A unit of weight equal to 1/16 of a pound.
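A couple of quick illustrations of the terms above, with numbers of my own choosing:

origin = (0, 0)                    # where the x-axis and y-axis intersect
point = (3, 4)                     # an ordered pair: 3 along x, 4 along y
distance = (point[0] ** 2 + point[1] ** 2) ** 0.5
print("distance from the origin:", distance)        # 5.0
ounces = 24
print(ounces, "ounces =", ounces / 16, "pounds")     # an ounce is 1/16 of a pound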
The Main Difference between Linear and Nonlinear Circuit

In simple words, a linear circuit is an electric circuit in which the circuit parameters (resistance, inductance, capacitance, waveform, frequency etc.) are constant. In other words, a circuit whose parameters are not changed with respect to current and voltage is called a linear circuit.

Fundamentally, the word “linear” literally means “along with a straight line”. As the name tells everything, a linear circuit means linear characteristics between current and voltage, which means the current flowing through a circuit is directly proportional to the applied voltage. If we increase the applied voltage, then the current flowing through the circuit will also increase, and vice versa. If we draw the circuit output characteristic curve between current and voltage, it will look like a straight line (diagonal) as shown in fig (1). Refer to Ohm’s Law, where we recognize that: “If the applied voltage increases, then current also increases (where resistance remains the same).” But this is not always the case. That’s why we use P = V × I instead of V = I × R (in a transformer).

In other words, in a linear circuit, the output response of the circuit is directly proportional to the input. A simple explanation of the above statement: in an electric circuit in which the applied sinusoidal voltage has frequency “f”, the output (current through a component or voltage between two points) of that circuit is also sinusoidal with frequency “f”.

Examples of Linear Circuits and Linear Elements
- Resistors and resistive circuits
- Inductors and inductive circuits
- Capacitors and capacitive circuits

Non Linear Circuit

A nonlinear circuit is an electric circuit whose parameters vary with respect to current and voltage. In other words, an electric circuit in which the circuit parameters (resistance, inductance, capacitance, waveform, frequency etc.) are not constant is called a nonlinear circuit. If we draw the circuit output characteristic curve between current and voltage, it will look like a curved or bending line as shown in fig (2).

Examples of Nonlinear Circuits and Nonlinear Elements
- Ideal diodes
- Iron-core inductors (when the core is saturated)
- Any circuit composed of such elements is called a nonlinear circuit.

Solving Linear and Nonlinear Circuits

Solving a nonlinear circuit is a little more complex than solving a linear circuit. A linear circuit can be solved with simple techniques and a scientific calculator, while solving nonlinear circuits requires a lot of data and information. But nowadays, due to aggressive technological changes and modernization, we can simulate and analyse both linear and nonlinear circuits, complete with output curves, very easily with the help of circuit simulation tools like PSpice, MATLAB, Multisim etc.
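As a quick numerical illustration of that straight-line versus curved behaviour, the Python sketch below compares a resistor (Ohm’s law, I = V/R) with a diode modelled by the Shockley equation. The resistance, saturation current and thermal voltage are typical textbook values chosen only to show the shape of the two curves; they are not taken from the article.

import math

R = 100.0        # resistance in ohms (assumed)
I_s = 1e-12      # diode saturation current in amps (assumed)
V_t = 0.025      # thermal voltage in volts at room temperature, approximately

for v in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]:
    i_resistor = v / R                          # doubles when the voltage doubles: linear
    i_diode = I_s * (math.exp(v / V_t) - 1)     # grows exponentially: nonlinear
    print(f"V={v:.1f} V   resistor I={i_resistor:.4f} A   diode I={i_diode:.3e} A")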
PUZZLER
When this honeybee gets back to its hive, it will tell the other bees how to return to the food it has found. By moving in a special, very precisely defined pattern, the bee conveys to other workers the information they need to find a flower bed. Bees communicate by “speaking in vectors.” What does the bee have to tell the other bees in order to specify where the flower bed is located relative to the hive? (E. Webber/Visuals Unlimited)

Chapter 3: Vectors

Chapter Outline
3.1 Coordinate Systems
3.2 Vector and Scalar Quantities
3.3 Some Properties of Vectors
3.4 Components of a Vector and Unit Vectors

We often need to work with physical quantities that have both numerical and directional properties. As noted in Section 2.1, quantities of this nature are represented by vectors. This chapter is primarily concerned with vector algebra and with some general properties of vector quantities. We discuss the addition and subtraction of vector quantities, together with some common applications to physical situations. Vector quantities are used throughout this text, and it is therefore imperative that you master both their graphical and their algebraic properties.

3.1 Coordinate Systems

Many aspects of physics deal in some form or other with locations in space. In Chapter 2, for example, we saw that the mathematical description of an object’s motion requires a method for describing the object’s position at various times. This description is accomplished with the use of coordinates, and in Chapter 2 we used the cartesian coordinate system, in which horizontal and vertical axes intersect at a point taken to be the origin (Fig. 3.1). Cartesian coordinates are also called rectangular coordinates.

Sometimes it is more convenient to represent a point in a plane by its plane polar coordinates (r, θ), as shown in Figure 3.2a. In this polar coordinate system, r is the distance from the origin to the point having cartesian coordinates (x, y), and θ is the angle between r and a fixed axis. This fixed axis is usually the positive x axis, and θ is usually measured counterclockwise from it. From the right triangle in Figure 3.2b, we find that sin θ = y/r and that cos θ = x/r. (A review of trigonometric functions is given in Appendix B.4.) Therefore, starting with the plane polar coordinates of any point, we can obtain the cartesian coordinates, using the equations

x = r cos θ   (3.1)
y = r sin θ   (3.2)

Furthermore, the definitions of trigonometry tell us that

tan θ = y/x   (3.3)
r = √(x² + y²)   (3.4)

These four expressions relating the coordinates (x, y) to the coordinates (r, θ) apply only when θ is defined as shown in Figure 3.2a — in other words, when positive θ is an angle measured counterclockwise from the positive x axis. (Some scientific calculators perform conversions between cartesian and polar coordinates based on these standard conventions.) If the reference axis for the polar angle is chosen to be one other than the positive x axis or if the sense of increasing θ is chosen differently, then the expressions relating the two sets of coordinates will change.

Figure 3.1 Designation of points in a cartesian coordinate system. Every point is labeled with coordinates (x, y).
Figure 3.2 (a) The plane polar coordinates of a point are represented by the distance r and the angle θ, where θ is measured counterclockwise from the positive x axis. (b) The right triangle used to relate (x, y) to (r, θ).
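The following short Python sketch (not part of the original text) applies Equations 3.1 to 3.4 numerically. Using math.atan2 keeps track of the signs of x and y automatically, which is exactly the quadrant issue Example 3.1 below highlights; the numbers are those of that example.

import math

x, y = -3.50, -2.50                       # cartesian coordinates in metres
r = math.sqrt(x ** 2 + y ** 2)            # Equation 3.4
theta = math.degrees(math.atan2(y, x))    # angle measured from the +x axis
if theta < 0:
    theta += 360                          # quote the angle counterclockwise from +x
print(f"r = {r:.2f} m, theta = {theta:.0f} degrees")   # r = 4.30 m, theta = 216 degrees

# And back again, using Equations 3.1 and 3.2:
print(r * math.cos(math.radians(theta)), r * math.sin(math.radians(theta)))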
Quick Quiz 3.1 Would the honeybee at the beginning of the chapter use cartesian or polar coordinates when specifying the location of the flower? Why? What is the honeybee using as an origin of coordinates? You may want to read Talking Apes and Dancing Bees (1997) by Betsy Wyckoff.

Example 3.1 Polar Coordinates
The cartesian coordinates of a point in the xy plane are (x, y) = (−3.50, −2.50) m, as shown in Figure 3.3. Find the polar coordinates of this point.

Solution
r = √(x² + y²) = √((−3.50 m)² + (−2.50 m)²) = 4.30 m
tan θ = y/x = (−2.50 m)/(−3.50 m) = 0.714
θ = 216°

Note that you must use the signs of x and y to find that the point lies in the third quadrant of the coordinate system. That is, θ = 216° and not 35.5°.

Figure 3.3 Finding polar coordinates when cartesian coordinates are given.

3.2 Vector and Scalar Quantities

As noted in Chapter 2, some physical quantities are scalar quantities whereas others are vector quantities. When you want to know the temperature outside so that you will know how to dress, the only information you need is a number and the unit “degrees C” or “degrees F.” Temperature is therefore an example of a scalar quantity, which is defined as a quantity that is completely specified by a number and appropriate units. That is,

A scalar quantity is specified by a single value with an appropriate unit and has no direction.

Other examples of scalar quantities are volume, mass, and time intervals. The rules of ordinary arithmetic are used to manipulate scalar quantities.

If you are getting ready to pilot a small plane and need to know the wind velocity, you must know both the speed of the wind and its direction. Because direction is part of the information it gives, velocity is a vector quantity, which is defined as a physical quantity that is completely specified by a number and appropriate units plus a direction. That is,

A vector quantity has both magnitude and direction.

Another example of a vector quantity is displacement, as you know from Chapter 2. Suppose a particle moves from some point A to some point B along a straight path, as shown in Figure 3.4. We represent this displacement by drawing an arrow from A to B, with the tip of the arrow pointing away from the starting point. The direction of the arrowhead represents the direction of the displacement, and the length of the arrow represents the magnitude of the displacement. If the particle travels along some other path from A to B, such as the broken line in Figure 3.4, its displacement is still the arrow drawn from A to B.

Figure 3.4 As a particle moves from A to B along an arbitrary path represented by the broken line, its displacement is a vector quantity shown by the arrow drawn from A to B.

(a) The number of apples in the basket is one example of a scalar quantity. Can you think of other examples? (Superstock) (b) Jennifer pointing to the right. A vector quantity is one that must be specified by both magnitude and direction. (Photo by Ray Serway) (c) An anemometer is a device meteorologists use in weather forecasting. The cups spin around and reveal the magnitude of the wind velocity. The pointer indicates the direction. (Courtesy of Peet Bros. Company, 1308 Doris Avenue, Ocean, NJ 07712)

In this text, we use a boldface letter, such as A, to represent a vector quantity. Another common method for vector notation that you should be aware of is the use of an arrow over a letter, such as A⃗.
The magnitude of the vector A is written either A or |A|. The magnitude of a vector has physical units, such as meters for displacement or meters per second for velocity.

3.3 Some Properties of Vectors

Equality of Two Vectors
For many purposes, two vectors A and B may be defined to be equal if they have the same magnitude and point in the same direction. That is, A = B only if A = B and if A and B point in the same direction along parallel lines. For example, all the vectors in Figure 3.5 are equal even though they have different starting points. This property allows us to move a vector to a position parallel to itself in a diagram without affecting the vector.

Figure 3.5 These four vectors are equal because they have equal lengths and point in the same direction.

Adding Vectors
The rules for adding vectors are conveniently described by geometric methods. To add vector B to vector A, first draw vector A, with its magnitude represented by a convenient scale, on graph paper and then draw vector B to the same scale with its tail starting from the tip of A, as shown in Figure 3.6. The resultant vector R = A + B is the vector drawn from the tail of A to the tip of B. This procedure is known as the triangle method of addition.

Figure 3.6 When vector B is added to vector A, the resultant R is the vector that runs from the tail of A to the tip of B.

For example, if you walked 3.0 m toward the east and then 4.0 m toward the north, as shown in Figure 3.7, you would find yourself 5.0 m from where you started, measured at an angle of 53° north of east. Your total displacement is the vector sum of the individual displacements.

Figure 3.7 Vector addition. Walking first 3.0 m due east and then 4.0 m due north leaves you |R| = √((3.0 m)² + (4.0 m)²) = 5.0 m from your starting point, at an angle θ = tan⁻¹(4.0/3.0) = 53°.

A geometric construction can also be used to add more than two vectors. This is shown in Figure 3.8 for the case of four vectors. The resultant vector R = A + B + C + D is the vector that completes the polygon. In other words, R is the vector drawn from the tail of the first vector to the tip of the last vector.

Figure 3.8 Geometric construction for summing four vectors. The resultant vector R is by definition the one that completes the polygon.

An alternative graphical procedure for adding two vectors, known as the parallelogram rule of addition, is shown in Figure 3.9a. In this construction, the tails of the two vectors A and B are joined together and the resultant vector R is the diagonal of a parallelogram formed with A and B as two of its four sides.

When two vectors are added, the sum is independent of the order of the addition. (This fact may seem trivial, but as you will see in Chapter 11, the order is important when vectors are multiplied.) This can be seen from the geometric construction in Figure 3.9b and is known as the commutative law of addition:

A + B = B + A   (3.5)

Figure 3.9 (a) In this construction, the resultant R is the diagonal of a parallelogram having sides A and B. (b) This construction shows that A + B = B + A — in other words, that vector addition is commutative.

When three or more vectors are added, their sum is independent of the way in which the individual vectors are grouped together. A geometric proof of this rule
for three vectors is given in Figure 3.10. This is called the associative law of addition:

A + (B + C) = (A + B) + C   (3.6)

Figure 3.10 Geometric constructions for verifying the associative law of addition.

In summary, a vector quantity has both magnitude and direction and also obeys the laws of vector addition as described in Figures 3.6 to 3.10. When two or more vectors are added together, all of them must have the same units. It would be meaningless to add a velocity vector (for example, 60 km/h to the east) to a displacement vector (for example, 200 km to the north) because they represent different physical quantities. The same rule also applies to scalars. For example, it would be meaningless to add time intervals to temperatures.

Negative of a Vector
The negative of the vector A is defined as the vector that when added to A gives zero for the vector sum. That is, A + (−A) = 0. The vectors A and −A have the same magnitude but point in opposite directions.

Subtracting Vectors
The operation of vector subtraction makes use of the definition of the negative of a vector. We define the operation A − B as vector −B added to vector A:

A − B = A + (−B)   (3.7)

The geometric construction for subtracting two vectors in this way is illustrated in Figure 3.11a. Another way of looking at vector subtraction is to note that the difference A − B between two vectors A and B is what you have to add to the second vector to obtain the first. In this case, the vector A − B points from the tip of the second vector to the tip of the first, as Figure 3.11b shows.

Figure 3.11 (a) This construction shows how to subtract vector B from vector A. The vector −B is equal in magnitude to vector B and points in the opposite direction. To subtract B from A, apply the rule of vector addition to the combination of A and −B: Draw A along some convenient axis, place the tail of −B at the tip of A, and C is the difference A − B. (b) A second way of looking at vector subtraction. The difference vector C = A − B is the vector that we must add to B to obtain A.

Example 3.2 A Vacation Trip
A car travels 20.0 km due north and then 35.0 km in a direction 60.0° west of north, as shown in Figure 3.12. Find the magnitude and direction of the car’s resultant displacement.

Solution In this example, we show two ways to find the resultant of two vectors. We can solve the problem geometrically, using graph paper and a protractor, as shown in Figure 3.12. (In fact, even when you know you are going to be carrying out a calculation, you should sketch the vectors to check your results.) The displacement R is the resultant when the two individual displacements A and B are added.

To solve the problem algebraically, we note that the magnitude of R can be obtained from the law of cosines as applied to the triangle (see Appendix B.4). With θ = 180° − 60° = 120° and R² = A² + B² − 2AB cos θ, we find that

R = √(A² + B² − 2AB cos θ)
  = √((20.0 km)² + (35.0 km)² − 2(20.0 km)(35.0 km) cos 120°)
  = 48.2 km

The direction of R measured from the northerly direction can be obtained from the law of sines (Appendix B.4):

sin β / B = sin θ / R
sin β = (B/R) sin θ = (35.0 km / 48.2 km) sin 120° = 0.629
β = 38.9°

Figure 3.12 Graphical method for finding the resultant displacement vector R = A + B.

The resultant displacement of the car is 48.2 km in a direction 38.9° west of north. This result matches what we found graphically.
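A quick numerical check of Example 3.2, written in Python (my own code, not part of the textbook). The first part applies the law of cosines exactly as in the solution; the second part finds the direction from east–north components instead of the law of sines.

import math

A, B = 20.0, 35.0                      # km, the two displacement magnitudes
theta = math.radians(120.0)            # interior angle of the triangle
R = math.sqrt(A ** 2 + B ** 2 - 2 * A * B * math.cos(theta))
print(f"R = {R:.1f} km")               # 48.2 km

# Same result from components (x east, y north); B points 60 degrees west of north.
Bx, By = -B * math.sin(math.radians(60.0)), B * math.cos(math.radians(60.0))
Rx, Ry = 0.0 + Bx, A + By
beta = math.degrees(math.atan2(-Rx, Ry))       # angle measured west of north
print(f"direction = {beta:.1f} degrees west of north")   # about 39, matching 38.9 to within rounding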
Multiplying a Vector by a Scalar
If vector A is multiplied by a positive scalar quantity m, then the product mA is a vector that has the same direction as A and magnitude mA. If vector A is multiplied by a negative scalar quantity −m, then the product −mA is directed opposite A. For example, the vector 5A is five times as long as A and points in the same direction as A; the vector −(1/3)A is one-third the length of A and points in the direction opposite A.

Quick Quiz 3.2 If vector B is added to vector A, under what condition does the resultant vector A + B have magnitude A + B? Under what conditions is the resultant vector equal to zero?

3.4 Components of a Vector and Unit Vectors

The geometric method of adding vectors is not recommended whenever great accuracy is required or in three-dimensional problems. In this section, we describe a method of adding vectors that makes use of the projections of vectors along coordinate axes. These projections are called the components of the vector. Any vector can be completely described by its components.

Consider a vector A lying in the xy plane and making an arbitrary angle θ with the positive x axis, as shown in Figure 3.13. This vector can be expressed as the sum of two other vectors Ax and Ay. From Figure 3.13, we see that the three vectors form a right triangle and that A = Ax + Ay. (If you cannot see why this equality holds, go back to Figure 3.9 and review the parallelogram rule.) We shall often refer to the “components of a vector A,” written Ax and Ay (without the boldface notation). The component Ax represents the projection of A along the x axis, and the component Ay represents the projection of A along the y axis. These components can be positive or negative. The component Ax is positive if Ax points in the positive x direction and is negative if Ax points in the negative x direction. The same is true for the component Ay.

Figure 3.13 Any vector A lying in the xy plane can be represented by a vector Ax lying along the x axis and by a vector Ay lying along the y axis, where A = Ax + Ay.

From Figure 3.13 and the definition of sine and cosine, we see that cos θ = Ax/A and that sin θ = Ay/A. Hence, the components of A are

Ax = A cos θ   (3.8)
Ay = A sin θ   (3.9)

These components form two sides of a right triangle with a hypotenuse of length A. Thus, it follows that the magnitude and direction of A are related to its components through the expressions

A = √(Ax² + Ay²)   (3.10)
θ = tan⁻¹(Ay/Ax)   (3.11)

Note that the signs of the components Ax and Ay depend on the angle θ. For example, if θ = 120°, then Ax is negative and Ay is positive. If θ = 225°, then both Ax and Ay are negative. Figure 3.14 summarizes the signs of the components when A lies in the various quadrants. When solving problems, you can specify a vector A either with its components Ax and Ay or with its magnitude and direction A and θ.

Figure 3.14 The signs of the components of a vector A depend on the quadrant in which the vector is located.

Quick Quiz 3.3 Can the component of a vector ever be greater than the magnitude of the vector?

Suppose you are working a physics problem that requires resolving a vector into its components. In many applications it is convenient to express the components in a coordinate system having axes that are not horizontal and vertical but are still perpendicular to each other. If you choose reference axes or an angle other than the axes and angle shown in Figure 3.13, the components must be modified accordingly.

Figure 3.15 The component vectors of B in a coordinate system that is tilted.
Suppose a vector B makes an angle θ′ with the x′ axis defined in Figure 3.15 (the component vectors of B in a coordinate system that is tilted). The components of B along the x′ and y′ axes are Bx′ = B cos θ′ and By′ = B sin θ′, as specified by Equations 3.8 and 3.9. The magnitude and direction of B are obtained from expressions equivalent to Equations 3.10 and 3.11. Thus, we can express the components of a vector in any coordinate system that is convenient for a particular situation.

Figure 3.14 The signs of the components of a vector A depend on the quadrant in which the vector is located: both components are positive in the first quadrant, Ax is negative and Ay positive in the second, both are negative in the third, and Ax is positive and Ay negative in the fourth.

Unit Vectors
Vector quantities often are expressed in terms of unit vectors. A unit vector is a dimensionless vector having a magnitude of exactly 1. Unit vectors are used to specify a given direction and have no other physical significance. They are used solely as a convenience in describing a direction in space. We shall use the symbols i, j, and k to represent unit vectors pointing in the positive x, y, and z directions, respectively. The unit vectors i, j, and k form a set of mutually perpendicular vectors in a right-handed coordinate system, as shown in Figure 3.16a. The magnitude of each unit vector equals 1; that is, |i| = |j| = |k| = 1.

Consider a vector A lying in the xy plane, as shown in Figure 3.16b. The product of the component Ax and the unit vector i is the vector Ax i, which lies on the x axis and has magnitude |Ax|. (The vector Ax i is an alternative representation of vector Ax.) Likewise, Ay j is a vector of magnitude |Ay| lying on the y axis. (Again, vector Ay j is an alternative representation of vector Ay.) Thus, the unit-vector notation for the vector A is

A = Ax i + Ay j   (3.12)

For example, consider a point lying in the xy plane and having cartesian coordinates (x, y), as in Figure 3.17. The point can be specified by the position vector r, which in unit-vector form is given by

r = x i + y j   (3.13)

This notation tells us that the components of r are the lengths x and y.

Now let us see how to use components to add vectors when the geometric method is not sufficiently accurate. Suppose we wish to add vector B to vector A, where vector B has components Bx and By. All we do is add the x and y components separately. The resultant vector R = A + B is therefore

R = (Ax i + Ay j) + (Bx i + By j)

or

R = (Ax + Bx) i + (Ay + By) j   (3.14)

Because R = Rx i + Ry j, we see that the components of the resultant vector are

Rx = Ax + Bx
Ry = Ay + By   (3.15)

Figure 3.16 (a) The unit vectors i, j, and k are directed along the x, y, and z axes, respectively. (b) Vector A = Ax i + Ay j lying in the xy plane has components Ax and Ay. Figure 3.17 The point whose cartesian coordinates are (x, y) can be represented by the position vector r = x i + y j. Figure 3.18 This geometric construction for the sum of two vectors shows the relationship between the components of the resultant R and the components of the individual vectors.

We obtain the magnitude of R and the angle it makes with the x axis from its components, using the relationships

R = √(Rx² + Ry²) = √((Ax + Bx)² + (Ay + By)²)   (3.16)
tan θ = Ry/Rx = (Ay + By)/(Ax + Bx)   (3.17)

We can check this addition by components with a geometric construction, as shown in Figure 3.18.
Remember that you must note the signs of the components when using either the algebraic or the geometric method.

At times, we need to consider situations involving motion in three component directions. The extension of our methods to three-dimensional vectors is straightforward. If A and B both have x, y, and z components, we express them in the form

A = Ax i + Ay j + Az k   (3.18)
B = Bx i + By j + Bz k   (3.19)

The sum of A and B is

R = (Ax + Bx) i + (Ay + By) j + (Az + Bz) k   (3.20)

Note that Equation 3.20 differs from Equation 3.14: in Equation 3.20, the resultant vector also has a z component Rz = Az + Bz.

Quick Quiz 3.4
If one component of a vector is not zero, can the magnitude of the vector be zero? Explain.

Quick Quiz 3.5
If A + B = 0, what can you say about the components of the two vectors?

Problem-Solving Hints: Adding Vectors
When you need to add two or more vectors, use this step-by-step procedure:
• Select a coordinate system that is convenient. (Try to reduce the number of components you need to find by choosing axes that line up with as many vectors as possible.)
• Draw a labeled sketch of the vectors described in the problem.
• Find the x and y components of all vectors and the resultant components (the algebraic sum of the components) in the x and y directions.
• If necessary, use the Pythagorean theorem to find the magnitude of the resultant vector and select a suitable trigonometric function to find the angle that the resultant vector makes with the x axis.

QuickLab
Write an expression for the vector describing the displacement of a fly that moves from one corner of the floor of the room that you are in to the opposite corner of the room, near the ceiling.

EXAMPLE 3.3 The Sum of Two Vectors
Find the sum of two vectors A and B lying in the xy plane and given by A = (2.0 i + 2.0 j) m and B = (2.0 i − 4.0 j) m.

Solution Comparing this expression for A with the general expression A = Ax i + Ay j, we see that Ax = 2.0 m and that Ay = 2.0 m. Likewise, Bx = 2.0 m and By = −4.0 m. We obtain the resultant vector R, using Equation 3.14:

R = A + B = (2.0 + 2.0) i m + (2.0 − 4.0) j m = (4.0 i − 2.0 j) m

or Rx = 4.0 m and Ry = −2.0 m. The magnitude of R is given by Equation 3.16:

R = √(Rx² + Ry²) = √((4.0 m)² + (−2.0 m)²) = √(20 m²) = 4.5 m

We can find the direction of R from Equation 3.17:

tan θ = Ry/Rx = (−2.0 m)/(4.0 m) = −0.50

Your calculator likely gives the answer −27° for tan⁻¹(−0.50). This answer is correct if we interpret it to mean 27° clockwise from the x axis. Our standard form has been to quote the angles measured counterclockwise from the positive x axis, and that angle for this vector is 333°.

EXAMPLE 3.4 The Resultant Displacement
A particle undergoes three consecutive displacements: d1 = (15 i + 30 j + 12 k) cm, d2 = (23 i − 14 j − 5.0 k) cm, and d3 = (−13 i + 15 j) cm. Find the components of the resultant displacement and its magnitude.

Solution Rather than looking at a sketch on flat paper, visualize the problem as follows: Start with your fingertip at the front left corner of your horizontal desktop. Move your fingertip 15 cm to the right, then 30 cm toward the far side of the desk, then 12 cm vertically upward, then 23 cm to the right, then 14 cm horizontally toward the front edge of the desk, then 5.0 cm vertically toward the desk, then 13 cm to the left, and (finally!) 15 cm toward the back of the desk. The mathematical calculation keeps track of this motion along the three perpendicular axes:

R = d1 + d2 + d3
  = (15 + 23 − 13) i cm + (30 − 14 + 15) j cm + (12 − 5.0 + 0) k cm
  = (25 i + 31 j + 7.0 k) cm

The resultant displacement has components Rx = 25 cm, Ry = 31 cm, and Rz = 7.0 cm. Its magnitude is

R = √(Rx² + Ry² + Rz²) = √((25 cm)² + (31 cm)² + (7.0 cm)²) = 40 cm
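The component method in Examples 3.3 and 3.4 is easy to mirror in code. The sketch below is our own (the helper names add_vectors and magnitude are ours); only the vector values come from the two examples. Note how atan2 returns the 333° direction directly, avoiding the calculator ambiguity mentioned in Example 3.3.

```python
import math

def add_vectors(*vectors):
    """Componentwise sum of vectors (Equations 3.14, 3.15, and 3.20)."""
    return tuple(sum(components) for components in zip(*vectors))

def magnitude(v):
    """Pythagorean theorem applied to the components (Equations 3.10 and 3.16)."""
    return math.sqrt(sum(c * c for c in v))

# Example 3.3: A = (2.0, 2.0) m, B = (2.0, -4.0) m
R = add_vectors((2.0, 2.0), (2.0, -4.0))
angle = math.degrees(math.atan2(R[1], R[0])) % 360.0
print(R, f"{magnitude(R):.1f} m", f"{angle:.0f} degrees")   # (4.0, -2.0) 4.5 m 333 degrees

# Example 3.4: three displacements in cm
R3 = add_vectors((15, 30, 12), (23, -14, -5.0), (-13, 15, 0))
print(R3, f"{magnitude(R3):.0f} cm")                        # (25, 31, 7.0) 40 cm
```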
EXAMPLE 3.5 Taking a Hike
A hiker begins a trip by first walking 25.0 km southeast from her car. She stops and sets up her tent for the night. On the second day, she walks 40.0 km in a direction 60.0° north of east, at which point she discovers a forest ranger's tower. (a) Determine the components of the hiker's displacement for each day.

Solution If we denote the displacement vectors on the first and second days by A and B, respectively, and use the car as the origin of coordinates, we obtain the vectors shown in Figure 3.19 (the total displacement of the hiker is the vector R = A + B). Displacement A has a magnitude of 25.0 km and is directed 45.0° below the positive x axis. From Equations 3.8 and 3.9, its components are

Ax = A cos(−45.0°) = (25.0 km)(0.707) = 17.7 km
Ay = A sin(−45.0°) = −(25.0 km)(0.707) = −17.7 km

The negative value of Ay indicates that the hiker walks in the negative y direction on the first day. The signs of Ax and Ay also are evident from Figure 3.19. The second displacement B has a magnitude of 40.0 km and is 60.0° north of east. Its components are

Bx = B cos 60.0° = (40.0 km)(0.500) = 20.0 km
By = B sin 60.0° = (40.0 km)(0.866) = 34.6 km

(b) Determine the components of the hiker's resultant displacement R for the trip. Find an expression for R in terms of unit vectors.

Solution The resultant displacement for the trip R = A + B has components given by Equation 3.15:

Rx = Ax + Bx = 17.7 km + 20.0 km = 37.7 km
Ry = Ay + By = −17.7 km + 34.6 km = 16.9 km

In unit-vector form, we can write the total displacement as R = (37.7 i + 16.9 j) km.

Exercise Determine the magnitude and direction of the total displacement.
Answer 41.3 km, 24.1° north of east from the car.

EXAMPLE 3.6 Let's Fly Away!
A commuter airplane takes the route shown in Figure 3.20 (the airplane starts at the origin, flies first to city A, then to city B, and finally to city C). First, it flies from the origin of the coordinate system shown to city A, located 175 km in a direction 30.0° north of east. Next, it flies 153 km 20.0° west of north to city B. Finally, it flies 195 km due west to city C. Find the location of city C relative to the origin.

Solution It is convenient to choose the coordinate system shown in Figure 3.20, where the x axis points to the east and the y axis points to the north. Let us denote the three consecutive displacements by the vectors a, b, and c. Displacement a has a magnitude of 175 km and the components

ax = a cos(30.0°) = (175 km)(0.866) = 152 km
ay = a sin(30.0°) = (175 km)(0.500) = 87.5 km

Displacement b, whose magnitude is 153 km, has the components

bx = b cos(110°) = (153 km)(−0.342) = −52.3 km
by = b sin(110°) = (153 km)(0.940) = 144 km

Finally, displacement c, whose magnitude is 195 km, has the components

cx = c cos(180°) = (195 km)(−1) = −195 km
cy = c sin(180°) = 0

Therefore, the components of the position vector R from the starting point to city C are

Rx = ax + bx + cx = 152 km − 52.3 km − 195 km = −95.3 km
Ry = ay + by + cy = 87.5 km + 144 km + 0 = 232 km

In unit-vector notation, R = (−95.3 i + 232 j) km. That is, the airplane can reach city C from the starting point by first traveling 95.3 km due west and then by traveling 232 km due north.

Exercise Find the magnitude and direction of R.
Answer 251 km, 22.3° west of north.
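Example 3.6 follows exactly the procedure given in the Problem-Solving Hints: convert each leg to components, add, and convert back. Here is a small Python sketch of that pipeline (our own; the helper name leg is ours, and the leg data are taken from the example).

```python
import math

def leg(magnitude, angle_deg):
    """Convert a displacement given as (magnitude, angle from +x axis) to components."""
    theta = math.radians(angle_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

# The three legs of the flight in Example 3.6 (angles measured counterclockwise from east)
legs = [leg(175.0, 30.0),    # to city A, 30.0 degrees north of east
        leg(153.0, 110.0),   # to city B, 20.0 degrees west of north
        leg(195.0, 180.0)]   # to city C, due west

rx = sum(x for x, _ in legs)
ry = sum(y for _, y in legs)
r_mag = math.hypot(rx, ry)
west_of_north = math.degrees(math.atan2(-rx, ry))   # angle measured from north toward west

print(f"R = ({rx:.1f}, {ry:.1f}) km, |R| = {r_mag:.0f} km, {west_of_north:.1f} deg W of N")
# Close to the worked answer R = (-95.3 i + 232 j) km, 251 km at 22.3 degrees west of north
# (small differences come from the rounding used in the text).
```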
Figure 3.21 (a) Vector addition by the triangle method. (b) Vector addition by the parallelogram rule.

SUMMARY
Scalar quantities are those that have only magnitude and no associated direction. Vector quantities have both magnitude and direction and obey the laws of vector addition. We can add two vectors A and B graphically, using either the triangle method or the parallelogram rule. In the triangle method (Fig. 3.21a), the resultant vector R = A + B runs from the tail of A to the tip of B. In the parallelogram method (Fig. 3.21b), R is the diagonal of a parallelogram having A and B as two of its sides. You should be able to add or subtract vectors, using these graphical methods.

The x component Ax of the vector A is equal to the projection of A along the x axis of a coordinate system, as shown in Figure 3.22, where Ax = A cos θ. The y component Ay of A is the projection of A along the y axis, where Ay = A sin θ. Be sure you can determine which trigonometric functions you should use in all situations, especially when θ is defined as something other than the counterclockwise angle from the positive x axis. (Figure 3.22 The addition of the two vectors Ax and Ay gives vector A. Note that Ax = Ax i and Ay = Ay j, where Ax and Ay are the components of vector A.)

If a vector A has an x component Ax and a y component Ay, the vector can be expressed in unit-vector form as A = Ax i + Ay j. In this notation, i is a unit vector pointing in the positive x direction, and j is a unit vector pointing in the positive y direction. Because i and j are unit vectors, |i| = |j| = 1.

We can find the resultant of two or more vectors by resolving all vectors into their x and y components, adding their resultant x and y components, and then using the Pythagorean theorem to find the magnitude of the resultant vector. We can find the angle that the resultant vector makes with respect to the x axis by using a suitable trigonometric function.

QUESTIONS
1. Two vectors have unequal magnitudes. Can their sum be zero? Explain.
2. Can the magnitude of a particle's displacement be greater than the distance traveled? Explain.
3. The magnitudes of two vectors A and B are A = 5 units and B = 2 units. Find the largest and smallest values possible for the resultant vector R = A + B.
4. Vector A lies in the xy plane. For what orientations of vector A will both of its components be negative? For what orientations will its components have opposite signs?
5. If the component of vector A along the direction of vector B is zero, what can you conclude about these two vectors?
6. Can the magnitude of a vector have a negative value? Explain.
7. Which of the following are vectors and which are not: force, temperature, volume, ratings of a television show, height, velocity, age?
8. Under what circumstances would a nonzero vector lying in the xy plane ever have components that are equal in magnitude?
9. Is it possible to add a vector quantity to a scalar quantity? Explain.

PROBLEMS
Key: 1, 2, 3 = straightforward, intermediate, challenging; (icon) = full solution available in the Student Solutions Manual and Study Guide; WEB = solution posted at http://www.saunderscollege.com/physics/; (icon) = computer useful in solving problem; (icon) = Interactive Physics; (icon) = paired numerical/symbolic problems

Section 3.1 Coordinate Systems
1. (WEB) The polar coordinates of a point are r = 5.50 m and θ = 240°. What are the cartesian coordinates of this point?
2.
Two points in the xy plane have cartesian coordinates (2.00, 4.00) m and ( 3.00, 3.00) m. Determine (a) the distance between these points and (b) their polar coordinates. 3. If the cartesian coordinates of a point are given by (2, y) and its polar coordinates are (r, 30°), determine y and r. 4. Two points in a plane have polar coordinates (2.50 m, 30.0°) and (3.80 m, 120.0°). Determine (a) the cartesian coordinates of these points and (b) the distance between them. 5. A fly lands on one wall of a room. The lower left-hand corner of the wall is selected as the origin of a twodimensional cartesian coordinate system. If the fly is located at the point having coordinates (2.00, 1.00) m, (a) how far is it from the corner of the room? (b) what is its location in polar coordinates? 6. If the polar coordinates of the point (x, y) are (r, ), determine the polar coordinates for the points (a) ( x, y), (b) ( 2x, 2y), and (c) (3x, 3y). Section 3.2 Vector and Scalar Quantities Section 3.3 Some Properties of Vectors 12. WEB 13. 14. WEB 15. ative x axis. Using graphical methods, find (a) the vector sum A B and (b) the vector difference A B. A force F1 of magnitude 6.00 units acts at the origin in a direction 30.0° above the positive x axis. A second force F2 of magnitude 5.00 units acts at the origin in the direction of the positive y axis. Find graphically the magnitude and direction of the resultant force F1 + F2 . A person walks along a circular path of radius 5.00 m. If the person walks around one half of the circle, find (a) the magnitude of the displacement vector and (b) how far the person walked. (c) What is the magnitude of the displacement if the person walks all the way around the circle? A dog searching for a bone walks 3.50 m south, then 8.20 m at an angle 30.0° north of east, and finally 15.0 m west. Using graphical techniques, find the dog’s resultant displacement vector. Each of the displacement vectors A and B shown in Figure P3.15 has a magnitude of 3.00 m. Find graphically (a) A B, (b) A B, (c) B A, (d) A 2B. Report all angles counterclockwise from the positive x axis. y B 7. An airplane flies 200 km due west from city A to city B and then 300 km in the direction 30.0° north of west from city B to city C. (a) In straight-line distance, how far is city C from city A? (b) Relative to city A, in what direction is city C? 8. A pedestrian moves 6.00 km east and then 13.0 km north. Using the graphical method, find the magnitude and direction of the resultant displacement vector. 9. A surveyor measures the distance across a straight river by the following method: Starting directly across from a tree on the opposite bank, she walks 100 m along the riverbank to establish a baseline. Then she sights across to the tree. The angle from her baseline to the tree is 35.0°. How wide is the river? 10. A plane flies from base camp to lake A, a distance of 280 km at a direction 20.0° north of east. After dropping off supplies, it flies to lake B, which is 190 km and 30.0° west of north from lake A. Graphically determine the distance and direction from lake B to the base camp. 11. Vector A has a magnitude of 8.00 units and makes an angle of 45.0° with the positive x axis. Vector B also has a magnitude of 8.00 units and is directed along the neg- 3.00 m A 0m 3.0 30.0° O Figure P3.15 x Problems 15 and 39. 16. Arbitrarily define the “instantaneous vector height” of a person as the displacement vector from the point halfway between the feet to the top of the head. 
Make an order-of-magnitude estimate of the total vector height of all the people in a city of population 100 000 (a) at 10 a.m. on a Tuesday and (b) at 5 a.m. on a Saturday. Explain your reasoning. 17. A roller coaster moves 200 ft horizontally and then rises 135 ft at an angle of 30.0° above the horizontal. It then travels 135 ft at an angle of 40.0° downward. What is its displacement from its starting point? Use graphical techniques. 18. The driver of a car drives 3.00 km north, 2.00 km northeast (45.0° east of north), 4.00 km west, and then 72 CHAPTER 3 Vectors 3.00 km southeast (45.0° east of south). Where does he end up relative to his starting point? Work out your answer graphically. Check by using components. (The car is not near the North Pole or the South Pole.) 19. Fox Mulder is trapped in a maze. To find his way out, he walks 10.0 m, makes a 90.0° right turn, walks 5.00 m, makes another 90.0° right turn, and walks 7.00 m. What is his displacement from his initial position? 24. Section 3.4 Components of a Vector and Unit Vectors 20. Find the horizontal and vertical components of the 100-m displacement of a superhero who flies from the top of a tall building following the path shown in Figure P3.20. WEB 25. 26. y 30.0° x 27. 100 m 28. Figure P3.20 21. A person walks 25.0° north of east for 3.10 km. How far would she have to walk due north and due east to arrive at the same location? 22. While exploring a cave, a spelunker starts at the entrance and moves the following distances: She goes 75.0 m north, 250 m east, 125 m at an angle 30.0° north of east, and 150 m south. Find the resultant displacement from the cave entrance. 23. In the assembly operation illustrated in Figure P3.23, a robot first lifts an object upward along an arc that forms one quarter of a circle having a radius of 4.80 cm and 29. 30. 31. 32. 33. 34. 35. 36. Figure P3.23 lying in an east – west vertical plane. The robot then moves the object upward along a second arc that forms one quarter of a circle having a radius of 3.70 cm and lying in a north – south vertical plane. Find (a) the magnitude of the total displacement of the object and (b) the angle the total displacement makes with the vertical. Vector B has x, y, and z components of 4.00, 6.00, and 3.00 units, respectively. Calculate the magnitude of B and the angles that B makes with the coordinate axes. A vector has an x component of 25.0 units and a y component of 40.0 units. Find the magnitude and direction of this vector. A map suggests that Atlanta is 730 mi in a direction 5.00° north of east from Dallas. The same map shows that Chicago is 560 mi in a direction 21.0° west of north from Atlanta. Assuming that the Earth is flat, use this information to find the displacement from Dallas to Chicago. A displacement vector lying in the xy plane has a magnitude of 50.0 m and is directed at an angle of 120° to the positive x axis. Find the x and y components of this vector and express the vector in unit – vector notation. If A 2.00i 6.00j and B 3.00i 2.00j, (a) sketch the vector sum C A B and the vector difference D A B. (b) Find solutions for C and D, first in terms of unit vectors and then in terms of polar coordinates, with angles measured with respect to the x axis. Find the magnitude and direction of the resultant of three displacements having x and y components (3.00, 2.00) m, ( 5.00, 3.00) m, and (6.00, 1.00) m. Vector A has x and y components of 8.70 cm and 15.0 cm, respectively; vector B has x and y components of 13.2 cm and 6.60 cm, respectively. 
If A B 3C 0, what are the components of C? Consider two vectors A 3i 2j and B i 4j. Calculate (a) A B, (b) A B, (c) 兩 A B 兩, (d) 兩 A B 兩, (e) the directions of A B and A B. A boy runs 3.00 blocks north, 4.00 blocks northeast, and 5.00 blocks west. Determine the length and direction of the displacement vector that goes from the starting point to his final position. Obtain expressions in component form for the position vectors having polar coordinates (a) 12.8 m, 150°; (b) 3.30 cm, 60.0°; (c) 22.0 in., 215°. Consider the displacement vectors A (3i 3j) m, B (i 4j) m, and C ( 2i 5j) m. Use the component method to determine (a) the magnitude and direction of the vector D A B C and (b) the magnitude and direction of E A B C. A particle undergoes the following consecutive displacements: 3.50 m south, 8.20 m northeast, and 15.0 m west. What is the resultant displacement? In a game of American football, a quarterback takes the ball from the line of scrimmage, runs backward for 10.0 yards, and then sideways parallel to the line of scrimmage for 15.0 yards. At this point, he throws a forward Problems pass 50.0 yards straight downfield perpendicular to the line of scrimmage. What is the magnitude of the football’s resultant displacement? 37. The helicopter view in Figure P3.37 shows two people pulling on a stubborn mule. Find (a) the single force that is equivalent to the two forces shown and (b) the force that a third person would have to exert on the mule to make the resultant force equal to zero. The forces are measured in units of newtons. y F1 = 120 N F2 = 80.0 N 75.0˚ 60.0˚ x Figure P3.37 38. A novice golfer on the green takes three strokes to sink the ball. The successive displacements are 4.00 m to the north, 2.00 m northeast, and 1.00 m 30.0° west of south. Starting at the same initial point, an expert golfer could make the hole in what single displacement? 39. Find the x and y components of the vectors A and B shown in Figure P3.15; then derive an expression for the resultant vector A B in unit – vector notation. 40. You are standing on the ground at the origin of a coordinate system. An airplane flies over you with constant velocity parallel to the x axis and at a constant height of 7.60 103 m. At t 0, the airplane is directly above you, so that the vector from you to it is given by P0 (7.60 103 m)j. At t 30.0 s, the position vector leading from you to the airplane is P30 (8.04 103 m)i (7.60 103 m)j. Determine the magnitude and orientation of the airplane’s position vector at t 45.0 s. 41. A particle undergoes two displacements. The first has a magnitude of 150 cm and makes an angle of 120° with the positive x axis. The resultant displacement has a magnitude of 140 cm and is directed at an angle of 35.0° to the positive x axis. Find the magnitude and direction of the second displacement. 73 42. Vectors A and B have equal magnitudes of 5.00. If the sum of A and B is the vector 6.00 j, determine the angle between A and B. 43. The vector A has x, y, and z components of 8.00, 12.0, and 4.00 units, respectively. (a) Write a vector expression for A in unit – vector notation. (b) Obtain a unit – vector expression for a vector B one-fourth the length of A pointing in the same direction as A. (c) Obtain a unit – vector expression for a vector C three times the length of A pointing in the direction opposite the direction of A. 44. Instructions for finding a buried treasure include the following: Go 75.0 paces at 240°, turn to 135° and walk 125 paces, then travel 100 paces at 160°. 
The angles are measured counterclockwise from an axis pointing to the east, the x direction. Determine the resultant displacement from the starting point. 45. Given the displacement vectors A (3i 4j 4k) m and B (2i 3j 7k) m, find the magnitudes of the vectors (a) C A B and (b) D 2A B, also expressing each in terms of its x, y, and z components. 46. A radar station locates a sinking ship at range 17.3 km and bearing 136° clockwise from north. From the same station a rescue plane is at horizontal range 19.6 km, 153° clockwise from north, with elevation 2.20 km. (a) Write the vector displacement from plane to ship, letting i represent east, j north, and k up. (b) How far apart are the plane and ship? 47. As it passes over Grand Bahama Island, the eye of a hurricane is moving in a direction 60.0° north of west with a speed of 41.0 km/h. Three hours later, the course of the hurricane suddenly shifts due north and its speed slows to 25.0 km/h. How far from Grand Bahama is the eye 4.50 h after it passes over the island? 48. (a) Vector E has magnitude 17.0 cm and is directed 27.0° counterclockwise from the x axis. Express it in unit – vector notation. (b) Vector F has magnitude 17.0 cm and is directed 27.0° counterclockwise from the y axis. Express it in unit – vector notation. (c) Vector G has magnitude 17.0 cm and is directed 27.0° clockwise from the y axis. Express it in unit – vector notation. 49. Vector A has a negative x component 3.00 units in length and a positive y component 2.00 units in length. (a) Determine an expression for A in unit – vector notation. (b) Determine the magnitude and direction of A. (c) What vector B, when added to vector A, gives a resultant vector with no x component and a negative y component 4.00 units in length? 50. An airplane starting from airport A flies 300 km east, then 350 km at 30.0° west of north, and then 150 km north to arrive finally at airport B. (a) The next day, another plane flies directly from airport A to airport B in a straight line. In what direction should the pilot travel in this direct flight? (b) How far will the pilot travel in this direct flight? Assume there is no wind during these flights. 74 WEB CHAPTER 3 Vectors 51. Three vectors are oriented as shown in Figure P3.51, where 兩 A 兩 20.0 units, 兩 B 兩 40.0 units, and 兩 C 兩 30.0 units. Find (a) the x and y components of the resultant vector (expressed in unit – vector notation) and (b) the magnitude and direction of the resultant vector. y 100 m Start y x 300 m End B 200 m A 30° 45.0° O 60° 150 m x 45.0° Figure P3.57 C Figure P3.51 52. If A (6.00i 8.00j) units, B ( 8.00i 3.00j) units, and C (26.0i 19.0j) units, determine a and b such that aA bB C 0. ADDITIONAL PROBLEMS 53. Two vectors A and B have precisely equal magnitudes. For the magnitude of A B to be 100 times greater than the magnitude of A B, what must be the angle between them? 54. Two vectors A and B have precisely equal magnitudes. For the magnitude of A B to be greater than the magnitude of A B by the factor n, what must be the angle between them? 55. A vector is given by R 2.00i 1.00j 3.00k. Find (a) the magnitudes of the x, y, and z components, (b) the magnitude of R, and (c) the angles between R and the x, y, and z axes. 56. Find the sum of these four vector forces: 12.0 N to the right at 35.0° above the horizontal, 31.0 N to the left at 55.0° above the horizontal, 8.40 N to the left at 35.0° below the horizontal, and 24.0 N to the right at 55.0° below the horizontal. 
(Hint: Make a drawing of this situation and select the best axes for x and y so that you have the least number of components. Then add the vectors, using the component method.) 57. A person going for a walk follows the path shown in Figure P3.57. The total trip consists of four straight-line paths. At the end of the walk, what is the person’s resultant displacement measured from the starting point? 58. In general, the instantaneous position of an object is specified by its position vector P leading from a fixed origin to the location of the object. Suppose that for a certain object the position vector is a function of time, given by P 4i 3j 2t j, where P is in meters and t is in seconds. Evaluate d P/dt. What does this derivative represent about the object? 59. A jet airliner, moving initially at 300 mi/h to the east, suddenly enters a region where the wind is blowing at 100 mi/h in a direction 30.0° north of east. What are the new speed and direction of the aircraft relative to the ground? 60. A pirate has buried his treasure on an island with five trees located at the following points: A(30.0 m, 20.0 m), B(60.0 m, 80.0 m), C( 10.0 m, 10.0 m), D(40.0 m, 30.0 m), and E( 70.0 m, 60.0 m). All points are measured relative to some origin, as in Figure P3.60. Instructions on the map tell you to start at A and move toward B, but to cover only one-half the distance between A and B. Then, move toward C, covering one-third the distance between your current location and C. Next, move toward D, covering one-fourth the distance between where you are and D. Finally, move toward E, covering one-fifth the distance between you and E, stop, and dig. (a) What are the coordinates of the point where the pirate’s treasure is buried? (b) ReB E y x C A D Figure P3.60 75 Answers to Quick Quizzes arrange the order of the trees, (for instance, B(30.0 m, 20.0 m), A(60.0 m, 80.0 m), E( 10.0 m, 10.0 m), C(40.0 m, 30.0 m), and D( 70.0 m, 60.0 m), and repeat the calculation to show that the answer does not depend on the order of the trees. 61. A rectangular parallelepiped has dimensions a, b, and c, as in Figure P3.61. (a) Obtain a vector expression for the face diagonal vector R1 . What is the magnitude of this vector? (b) Obtain a vector expression for the body diagonal vector R2 . Note that R1 , ck, and R2 make a right triangle, and prove that the magnitude of R2 is √a 2 b 2 c 2. 62. A point lying in the xy plane and having coordinates (x, y) can be described by the position vector given by r x i y j. (a) Show that the displacement vector for a particle moving from (x 1 , y 1 ) to (x 2 , y 2 ) is given by d (x 2 x 1 )i (y 2 y 1 )j. (b) Plot the position vectors r1 and r2 and the displacement vector d, and verify by the graphical method that d r2 r1 . 63. A point P is described by the coordinates (x, y) with respect to the normal cartesian coordinate system shown in Figure P3.63. Show that (x, y), the coordinates of this point in the rotated coordinate system, are related to (x, y) and the rotation angle by the expressions x x cos y x sin z y sin y cos y a P b y′ x′ O x R2 α c O R1 x y Figure P3.61 Figure P3.63 ANSWERS TO QUICK QUIZZES 3.1 The honeybee needs to communicate to the other honeybees how far it is to the flower and in what direction they must fly. This is exactly the kind of information that polar coordinates convey, as long as the origin of the coordinates is the beehive. 3.2 The resultant has magnitude A B when vector A is oriented in the same direction as vector B. 
The resultant vector is A + B = 0 when vector A is oriented in the direction opposite vector B and A = B.
3.3 No. In two dimensions, a vector and its components form a right triangle. The vector is the hypotenuse and must be longer than either side. Problem 61 extends this concept to three dimensions.
3.4 No. The magnitude of a vector A is equal to √(Ax² + Ay² + Az²). Therefore, if any component is nonzero, A cannot be zero. This generalization of the Pythagorean theorem is left for you to prove in Problem 61.
3.5 The fact that A + B = 0 tells you that A = −B. Therefore, the components of the two vectors must have opposite signs and equal magnitudes: Ax = −Bx, Ay = −By, and Az = −Bz.
Contents Lesson 14-1Counting Outcomes Lesson 14-2Permutations and Combinations Lesson 14-3Probability of Compound Events Lesson 14-4Probability Distributions Lesson 14-5Probability Simulations Lesson 1 Contents Example 1Tree Diagram Example 2Fundamental Counting Principle Example 3Counting Arrangements Example 4Factorial Example 5Use Factorials to Solve a Problem Example 1-1a At football games, a student concession stand sells sandwiches on either wheat or rye bread. The sandwiches come with salami, turkey, or ham, and either chips, a brownie, or fruit. Use a tree diagram to determine the number of possible sandwich combinations. Example 1-1b Answer:The tree diagram shows that there are 18 possible combinations. Example 1-1c A lunch buffet offers a combination of a meat, a vegetable, and a drink for $5.99. The choices of meat are chicken or pork; the choices of vegetable are carrots, broccoli, green beans, or potatoes; and the choices of drink are milk, lemonade, or a soft drink. Use a tree diagram to determine the number of possible lunch combinations. Answer:24 different lunches Example 1-2a The Too Cheap computer company sells custom made personal computers. Customers have a choice of 11 different hard drives, 6 different keyboards, 4 different mice, and 4 different monitors. How many different custom computers can you order? Multiply to find the number of custom computers. hard drive choices keyboard choices mice choices monitor choices number of custom computers Answer:The number of different custom computers is 1056. Example 1-2b A major league team is trying to organize their draft. In their first five rounds, they want to pick a pitcher, a catcher, a first baseman, a third basemen, and an outfielder. They are considering 7 pitchers, 9 catchers, 3 first baseman, 4 third baseman, and 12 outfielders. How many ways can they draft players for these five positions? Answer: 9072 Example 1-3a There are 8 students in the Algebra Club at Central High School. The students want to stand in a line for their yearbook picture. How many different ways could the 8 students stand for their picture? The number of ways to arrange the students can be found by multiplying the number of choices for each position. There are eight people from which to choose for the first position. After choosing a person for the first position, there are seven people left from which to choose for the second position. Example 1-3b There are now six choices for the third position. This process continues until there is only one choice left for the last position. Let n represent the number of arrangements. Answer:There are 40,320 different ways they could stand. Example 1-3c There are 11 people performing in a talent show. The program coordinator is trying to arrange the order in which each participant will perform. How many different ways can the order of performances be arranged? Answer:39,916,800 ways Example 1-4a Find the value of 9!. Definition of factorial Simplify. Answer: Example 1-4b Find the value of 7!. Answer: 5040 Example 1-5a Jill and Miranda are going to a national park for their vacation. Near the campground where they are staying, there are 8 hiking trails. How many different ways can they hike all of the trails if they hike each trail only once? Use a factorial. Definition of factorial Simplify. Answer:There are 40,320 ways in which Jill and Miranda can hike all 8 trails. Example 1-5b Jill and Miranda are going to a national park for their vacation. 
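The counting in Lesson 1 so far reduces to the Fundamental Counting Principle and factorials, and the remaining examples work the same way. As a quick check of the stated answers, here is a short Python sketch (our own, not part of the slides) that recomputes a few of them with the math module.

```python
import math

# Fundamental Counting Principle: hard drives * keyboards * mice * monitors (Example 1-2a)
print(11 * 6 * 4 * 4)                            # 1056 custom computers

# Five draft picks chosen from 7, 9, 3, 4, and 12 candidates (Example 1-2b)
print(7 * 9 * 3 * 4 * 12)                        # 9072

# Arrangements of 8 students for a photo and of 11 talent-show acts (Examples 1-3a, 1-3c)
print(math.factorial(8), math.factorial(11))     # 40320 39916800

# The factorial values asked for in Example 1-4
print(math.factorial(9), math.factorial(7))      # 362880 5040
```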
Near the campground where they are staying, there are 8 hiking trails. If they only have time to hike on 5 of the trails, how many ways can they do this? Use the Fundamental Counting Principle to find the sample space. Fundamental Counting Principle Simplify. Answer:There are 6720 ways that Jill and Miranda can hike 5 of the trails. Example 1-5c Jack and Renee want to take a cross-country trip over the summer to 10 different cities. They are trying to decide the order in which they should travel. a.How many different orders can they travel to the 10 cities if they go to each city once? b.Suppose they only have time to go to 8 of the cities. How many ways can they do this? Answer: 3,628,800 Answer: 1,814,400 End of Lesson 1 Lesson 2 Contents Example 1Tree Diagram Permutation Example 2Permutation Example 3Permutation and Probability Example 4Combination Example 5Use Combinations Example 2-1a Ms. Baraza asks pairs of students to go in front of her Spanish class to read statements in Spanish, and then to translate the statement into English. One student is the Spanish speaker and one is the English speaker. If Ms. Baraza has to choose between Jeff, Kathy, Guillermo, Ana, and Patrice, how many different ways can Ms. Baraza pair the students? Use a tree diagram to show the possible arrangements. Example 2-1b Answer:There are 20 different ways for the 5 students to be paired. Example 2-1c There are five finalists in the student art contest: Cal, Jeanette, Emily, Elizabeth, and Ron. The winner and the runner-up of the contest will receive prizes. How many possible ways are there for the winners to be chosen? Answer: 20 Example 2-2a Find Definition of Subtract. Example 2-2b Simplify. Answer:There are 1680 permutations of 8 objects taken 4 at a time. Definition of factorial 1 1 Example 2-2c Find Answer: 15,120 Example 2-3a Shaquille has a 5-digit pass code to access his account. The code is made up of the even digits 2, 4, 6, 8, and 0. Each digit can be used only once. How many different pass codes could Shaquille have? Since the order of the numbers in the code is important, this situation is a permutation of 5 digits taken 5 at a time. Definition of permutation Example 2-3b Definition of factorial Answer:There are 120 possible pass codes with the digits 2, 4, 6, 8, and 0. Example 2-3c Shaquille has a 5-digit pass code to access his account. The code is made up of the even digits 2, 4, 6, 8, and 0. Each digit can be used only once. What is the probability that the first two digits of his code are both greater than 5? Use the Fundamental Counting Principle to determine the number of ways for the first two digits to be greater than 5. There are 2 digits greater than 5 and 3 digits less than 5. The number of choices for the first two digits, if they are greater than 5, is 2 1. The number of choices for the remaining digits is Example 2-3d The number of favorable outcomes is or 12. There are 12 ways for this event to occur out of the 120 possible permutations. Simplify. Answer:The probability that the first two digits of the pass code are greater than 5 is or 10%. Example 2-3e Bridget and Brittany are trying to find a house, but they cannot remember the address. They can remember only that the digits used are 1, 2, 5, and 8, and that no digit is used twice. a.How many possible addresses are there? b. What is the probability that the first two numbers are odd? 
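The permutation examples above are likewise easy to verify; the remaining examples in this lesson follow the same pattern. The sketch below is our own check in Python (math.perm requires Python 3.8 or later), using only the counts stated in Examples 2-2 and 2-3.

```python
import math

# Permutations of 8 objects taken 4 at a time (Example 2-2)
print(math.perm(8, 4))                 # 1680

# 5-digit pass codes built from the digits 2, 4, 6, 8, 0, each used once (Example 2-3)
total = math.perm(5, 5)                # 120 possible codes

# First two digits both greater than 5: choose and arrange 6 and 8 first,
# then arrange the remaining three digits.
favorable = math.perm(2, 2) * math.perm(3, 3)
print(total, favorable, favorable / total)       # 120 12 0.1, i.e. 10%
```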
Answer:24 addresses Answer: or about 17% Example 2-4a Multiple-Choice Test Item Customers at Tony’s Pizzeria can choose 4 out of 12 toppings for each pizza for no extra charge. How many different combinations of pizza toppings can be chosen? A 495 B 792 C 11,880 D 95,040 Read the Test Item The order in which the toppings are chosen does not matter, so this situation represents a combination of 12 toppings taken 4 at a time. Example 2-4b Solve the Test Item Definition of combination Definition of factorial 1 1 Example 2-4c Simplify. Answer:There are 495 different ways to select toppings. Choice A is correct. Example 2-4d Multiple-Choice Test Item A cable company is having a sale on their premium channels. Out of 8 possible premium channels, they are allowing customers to pick 5 channels at no extra charge. How many channel packages are there? A 6720 B 56 C 336 D 120 Answer:B Example 2-5a Diane has a bag full of coins. There are 10 pennies, 6 nickels, 4 dimes, and 2 quarters in the bag. How many different ways can Diane pull four coins out of the bag? The order in which the coins are chosen does not matter, so we must find the number of combinations of 22 coins taken 4 at a time. Definition of combination Example 2-5b Simplify. Answer:There are 7315 ways to pull 4 coins out of a bag of 22. Divide by the GCF, 18!. 1 1 Example 2-5c Diane has a bag full of coins. There are 10 pennies, 6 nickels, 4 dimes, and 2 quarters in the bag. What is the probability that she will pull two pennies and two nickels out of the bag? There are two questions to consider. How many ways can 2 pennies be pulled from 10? How many ways can 2 nickels be pulled from 6? Using the Fundamental Counting Principle, the answer can be determined with the product of the two combinations. Example 2-5d ways to choose 2 pennies out of 10 ways to choose 2 nickels out of 6 Definition of combination Simplify. Example 2-5e Divide the first term by its GCF, 8!, and the second term by its GCF, 4!. Simplify. There are 675 ways to choose this particular combination out of 7315 possible combinations. Example 2-5f Simplify. Answer:The probability that Diane will select two pennies and two nickels is or about 9%. Example 2-5g At a factory, there are 10 union workers, 12 engineers, and 5 foremen. The company needs 6 of these workers to attend a national conference. a.How many ways could the company choose the 6 workers? b.If the workers are chosen randomly, what is the probability that 3 union workers, 2 engineers, and 1 foreman are selected? Answer:296,010 ways Answer: or about 13% End of Lesson 2 Lesson 3 Contents Example 1Independent Events Example 2Dependent Events Example 3Mutually Exclusive Events Example 4Inclusive Events Example 3-1a Roberta is flying from Birmingham to Chicago to visit her grandmother. She has to fly from Birmingham to Houston on the first leg of her trip. In Houston she changes planes and heads on to Chicago. The airline reports that the flight from Birmingham to Houston has a 90% on time record, and the flight from Houston to Chicago has a 50% on time record. What is the probability that both flights will be on time? Example 3-1b Multiply. Answer:The probability that both flights will be on time is 45%. Definition of independent events Example 3-1c Two cities, Fairfield and Madison, lie on different faults. 
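Before the lesson turns fully to compound events, the combination results above can be rechecked in a few lines. This is our own Python sketch; all counts come from Examples 2-4 and 2-5 and their follow-up.

```python
import math
from fractions import Fraction

# Combinations: 4 pizza toppings from 12, and 5 premium channels from 8 (Examples 2-4)
print(math.comb(12, 4), math.comb(8, 5))            # 495 56

# Example 2-5: four coins drawn from 10 pennies, 6 nickels, 4 dimes, 2 quarters
total = math.comb(22, 4)                            # 7315 possible draws
favorable = math.comb(10, 2) * math.comb(6, 2)      # two pennies and two nickels: 675
print(total, favorable, float(Fraction(favorable, total)))     # about 0.09

# Follow-up: choose 6 of 27 workers; P(3 union, 2 engineers, 1 foreman)
ways = math.comb(27, 6)
p = Fraction(math.comb(10, 3) * math.comb(12, 2) * math.comb(5, 1), ways)
print(ways, float(p))                               # 296010, about 0.13
```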
There is a 60% chance that Fairfield will experience an earthquake by the year 2010 and a 40% chance that Madison will experience an earthquake by Find the probability that both cities will experience an earthquake by Answer:24% Example 3-2a At the school carnival, winners in the ring-toss game are randomly given a prize from a bag that contains 4 sunglasses, 6 hairbrushes, and 5 key chains. Three prizes are randomly drawn from the bag and not replaced. Find P( sunglasses, hairbrush, key chain ). The selection of the first prize affects the selection of the next prize since there is one less prize from which to choose. So, the events are dependent. Example 3-2b First prize: Second prize: Third prize: Example 3-2c Multiply. Substitution Answer:The probability of drawing sunglasses, a hairbrush, and a key chain is Example 3-2d At the school carnival, winners in the ring-toss game are randomly given a prize from a bag that contains 4 sunglasses, 6 hairbrushes, and 5 key chains. Three prizes are randomly drawn from the bag and not replaced. Find P( hairbrush, hairbrush, key chain ). Notice that after selecting a hairbrush, not only is there one fewer prize from which to choose, there is also one fewer hairbrush. Example 3-2e Multiply. Answer:The probability of drawing two hairbrushes and then a key chain is Substitution Example 3-2f At the school carnival, winners in the ring-toss game are randomly given a prize from a bag that contains 4 sunglasses, 6 hairbrushes, and 5 key chains. Three prizes are randomly drawn from the bag and not replaced. Find P( sunglasses, hairbrush, not key chain ). Since the prize that is not a key chain is selected after the first two prizes, there are 10 – 2 or 8 prizes that are not key chains. Example 3-2g Multiply. Substitution Answer:The probability of drawing sunglasses, a hairbrush, and not a key chain is Example 3-2h A gumball machine contains 16 red gumballs, 10 blue gumballs, and 18 green gumballs. Once a gumball is removed from the machine, it is not replaced. Find each probability if the gumballs are removed in the order indicated. a. P( red, green, blue ) b. P( blue, green, green ) c. P( green, blue, not red ) Answer: Example 3-3a Alfred is going to the Lakeshore Animal Shelter to pick a new pet. Today, the shelter has 8 dogs, 7 cats, and 5 rabbits available for adoption. If Alfred randomly picks an animal to adopt, what is the probability that the animal would be a cat or a dog? Since a pet cannot be both a dog and a cat, the events are mutually exclusive. Example 3-3b Definition of mutually exclusive events Substitution Example 3-3c Add. Answer:The probability of randomly picking a cat or a dog is Example 3-3d The French Club has 16 seniors, 12 juniors, 15 sophomores, and 21 freshmen as members. What is the probability that a member chosen at random is a junior or a senior? Answer: Example 3-4a A dog has just given birth to a litter of 9 puppies. There are 3 brown females, 2 brown males, 1 mixed-color female, and 3 mixed-color males. If you choose a puppy at random from the litter, what is the probability that the puppy will be male or mixed-color? Since three of the puppies are both mixed-colored and males, these events are inclusive. Example 3-4b Definition of inclusive events Substitution LCD is 9. Example 3-4c Simplify. Answer:The probability of a puppy being a male or mixed-color is or about 67%. Example 3-4d In Mrs. Kline’s class, 7 boys have brown eyes and 5 boys have blue eyes. Out of the girls, 6 have brown eyes and 8 have blue eyes. 
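The compound-event rules used so far (and in the class example that continues below) are simple enough to verify exactly. Here is a minimal Python sketch of our own, using Fraction so the arithmetic stays exact; the counts come from Examples 3-1, 3-3a, and 3-4a.

```python
from fractions import Fraction

# Independent events (Examples 3-1): multiply the individual probabilities
print(round(0.90 * 0.50, 2), round(0.60 * 0.40, 2))   # 0.45 and 0.24

# Mutually exclusive events (Example 3-3a): a cat or a dog from 8 dogs, 7 cats, 5 rabbits
print(Fraction(7, 20) + Fraction(8, 20))              # 3/4

# Inclusive events (Example 3-4a): male or mixed-color puppy in a litter of
# 3 brown females, 2 brown males, 1 mixed female, 3 mixed males
p = Fraction(5, 9) + Fraction(4, 9) - Fraction(3, 9)
print(p, float(p))                                    # 2/3, about 0.67
```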
If a student is chosen at random from the class, what is the probability that the student will be a boy or have brown eyes? Answer: End of Lesson 3 Lesson 4 Contents Example 1Random Variable Example 2Probability Distribution Example 4-1a The owner of a pet store asked customers how many pets they owned. The results of this survey are shown in the table. Find the probability that a randomly chosen customer has at most 2 pets. Number of Pets Number of Customers There are or 73 outcomes in which a customer owns at most 2 pets, and there are 100 survey results. Example 4-1b Answer:The probability that a randomly chosen customer owns at most 2 pets is Example 4-1c The owner of a pet store asked customers how many pets they owned. The results of this survey are shown in the table. Find the probability that a randomly chosen customer has 2 or 3 pets. Number of Pets Number of Customers There are or 51 outcomes in which a customer owns 2 or 3 pets. Example 4-1d Answer:The probability that a randomly chosen customer owns 2 or 3 pets is Example 4-1e A survey was conducted concerning the number of movies people watch at the theater per month. The results of this survey are shown in the table. a.Find the probability that a randomly chosen person watches at most 1 movie per month. Number of movies (per month) Number of people Answer: Example 4-1f A survey was conducted concerning the number of movies people watch at the theater per month. The results of this survey are shown in the table. Answer: Number of movies (per month) Number of people b. Find the probability that a randomly chosen person watches 0 or 4 movies per month. Example 4-2a The table shows the probability distribution of the number of students in each grade at Sunnybrook High School. If a student is chosen at random, what is the probability that he or she is in grade 11 or above? Recall that the probability of a compound event is the sum of the probabilities of each individual event. The probability of a student being in grade 11 or above is the sum of the probability of grade 11 and the probability of grade 12. X = GradeP(X)P(X) Example 4-2b Sum of individual probabilities Answer:The probability of a student being in grade 11 or above is 0.45. Example 4-2c The table shows the probability distribution of the number of students in each grade at Sunnybrook High School. Make a probability histogram of the data. Draw and label the vertical and horizontal axes. Remember to use equal intervals on each axis. Include a title. X = GradeP(X)P(X) Example 4-2d Answer: Example 4-2e The table shows the probability distribution of the number of children per family in the city of Maplewood. a.If a family was chosen at random, what is the probability that they have at least 2 children? X = Number of Children P(X)P(X) Answer: 0.66 Example 4-2f b.Make a probability histogram of the data. Answer: End of Lesson 4 Lesson 5 Contents Example 1Experimental Probability Example 2Empirical Study Example 3Simulation Example 4Theoretical and Experimental Probability Example 5-1a Miguel shot 50 free throws in the gym and found that his experimental probability of making a free throw was 40%. How many free throws did Miguel make? Miguel’s experimental probability of making a free throw was 40%. The number of successes can be written as 40 out of every 100 free throws. experimental probability number of success total number of free throws Example 5-1b Since Miguel only shot 50 free throws, write and solve a proportion. 
experimental successes Miguel’s successes Miguel’s total free throws experimental total free throws Find the cross products. Simplify. Divide each side by 100. Answer:Miguel made 20 free throws. Example 5-1c Nancy was testing her serving accuracy in volleyball. She served 80 balls and found that her experimental probability of keeping it in bounds was 60%. How many serves did she keep in bounds? Answer: 48 Example 5-2a A pharmaceutical company performs three clinical studies to test the effectiveness of a new medication. Each study involves 100 volunteers. The results of the studies are shown in the table. Study of New Medication ResultStudy 1Study 2Study 3 Expected Success Rate 70% Condition Improved 61%74%67% No Improvement 39%25%33% Condition Worsened 0% 1% 0% What is the experimental probability that the drug showed no improvement in patients for all three studies? Example 5-2b The number of outcomes with no improvement for the three studies was or 97 out of the 300 total patients. experimental probability Answer:The experimental probability of the three studies wasor about 32%. Example 5-2c A new study is being developed to analyze the relationship between heart rate and watching scary movies. A researcher performs three studies, each with 100 volunteers. Based on similar studies, the researcher expects that 80% of the subjects will experience a significant increase in heart rate. The table shows the results of the study. Study of Heart Rate ResultStudy 1Study 2Study 3 Expected Success Rate 80% Rate increased significantly 83%75%78% Little or no increase 16%24%19% Rate decreased 1% 3% Example 5-2d What is the experimental probability that the movie would cause a significant increase in heart rate for all three studies? Answer: or about 79% Example 5-3a In the last 30 school days, Bobbie’s older brother has given her a ride to school 5 times. What could be used to simulate whether Bobbie’s brother will give her a ride to school? Bobbie got a ride to school ondays. Answer:Since a die has 6 sides, you could use one side of a die to represent a ride to school. Example 5-3b In the last 30 school days, Bobbie’s older brother has given her a ride to school 5 times. Describe a way to simulate whether Bobbie’s brother will give her a ride to school in the next 20 school days. Choose the side of the die that will be used to represent a ride to school. Answer:Let the 1-side of the die equal a ride to school. Toss the die 20 times and record each result. Example 5-3c In the last 52 days, it has rained 4 times. a.What could be used to simulate whether it will rain on a given day? b.Describe a way to simulate whether it will rain in the next 15 days. Answer:It rained onof the days. You could use a deck of cards to simulate the situation. Answer:Let the aces equal a rainy day. Draw cards 15 times and record the results. Example 5-4a Dogs Ali raises purebred dogs. One of her dogs is expecting a litter of four puppies, and Ali would like to figure out the most likely mix of male and female puppies. Assume that One possible simulation would be to toss four coins, one for each puppy, with heads representing female and tails representing male. What is an alternative to using 4 coins that could model the possible combinations of the puppies? Example 5-4b Each puppy can be male or female, so there are 2 2 2 2 or 16 possible outcomes for the litter Sample answer: a spinner with 16 equal divisions Example 5-4c Dogs Ali raises purebred dogs. 
One of her dogs is expecting a litter of four puppies, and Ali would like to figure out the most likely mix of male and female puppies. Assume that a puppy is equally likely to be male or female. Find the theoretical probability that there will be 4 female puppies in a litter. There are 16 possible outcomes, and the number of combinations that have 4 female puppies is C(4, 4), or 1. Answer: So the theoretical probability is 1/16.

Example 5-4e Dogs. Ali raises purebred dogs. One of her dogs is expecting a litter of four puppies, and Ali would like to figure out the most likely mix of male and female puppies. Assume that a puppy is equally likely to be male or female. The results of a simulation Ali performed are shown in the table below. How does the theoretical probability that there will be 4 females compare with Ali's results?

Outcome: Frequency
4 female, 0 male: 3
3 female, 1 male: 13
2 female, 2 male: 18
1 female, 3 male: 12
0 female, 4 male: 4

Theoretical probability = (combinations with 4 female puppies) / (possible outcomes) = 1/16. Experimental probability: Ali performed 50 trials and 3 of those resulted in 4 females, so the experimental probability is 3/50. Answer: The theoretical probability is a little more than 6% and the experimental probability is 6%, so they are very close.

Example 5-4h In baseball, the Cleveland Indians and Chicago White Sox play each other five times in the next week. The manager would like to figure out the most likely mix of wins and losses. Assume that the two teams are equally likely to win each game.
a. What objects can be used to model the possible outcomes of the games? Sample Answer: Flip five coins, one for each game, with heads representing an Indians win and tails representing a White Sox win.
b. Find the theoretical probability that the Indians will win three games. Answer: C(5, 3)/32 = 10/32, a little more than 31%.
c. Below are the results of the last thirty 5-game series between the two teams. How does the theoretical probability that the Indians will win three games compare with the results?

Outcome: Frequency
Indians win every game: 2
Indians win four, White Sox win one: 6
Indians win three, White Sox win two: 10
Indians win two, White Sox win three: 7
Indians win one, White Sox win four: 4
White Sox win every game: 1

Answer: The theoretical probability is a little more than 31% and the experimental probability is a little more than 33%, so they are moderately close.

End of Lesson 5
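The point of Lesson 5, that an experimental probability from a simulation should land near the theoretical one, can be demonstrated directly in code. The sketch below is our own; the 50-trial count and the 1/16 theoretical value come from Ali's puppy example, and the coin-flip model is the one the slides suggest.

```python
import random

random.seed(1)          # fixed seed so the run is reproducible
trials = 50             # same number of trials Ali used

all_female_count = 0
for _ in range(trials):
    litter = [random.choice("FM") for _ in range(4)]   # each puppy equally likely F or M
    if litter.count("F") == 4:
        all_female_count += 1

theoretical = 1 / 16                      # C(4,4) out of 2**4 equally likely outcomes
experimental = all_female_count / trials
print(f"theoretical = {theoretical:.4f}, experimental = {experimental:.4f}")
# Over many runs the experimental value scatters around 0.0625, just as the
# lesson's comparison of 3/50 = 6% with 1/16 = 6.25% illustrates.
```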
Known in Europe as the Mayer–Norton theorem, Norton's theorem holds, to illustrate in DC circuit theory terms, that (see image):
- Any linear electrical network with voltage and current sources and only resistances can be replaced at terminals A-B by an equivalent current source INo in parallel connection with an equivalent resistance RNo.
- This equivalent current INo is the current obtained at terminals A-B of the network with terminals A-B short circuited.
- This equivalent resistance RNo is the resistance obtained at terminals A-B of the network with all its voltage sources short circuited and all its current sources open circuited.
The Norton equivalent circuit is used to represent any network of linear sources and impedances at a given frequency. Norton's theorem and its dual, Thévenin's theorem, are widely used to simplify circuit analysis and to study a circuit's initial-condition and steady-state response. To find the equivalent,
- Find the Norton current INo. Calculate the output current, IAB, with a short circuit as the load (meaning 0 resistance between A and B). This is INo.
- Find the Norton resistance RNo. When there are no dependent sources (all current and voltage sources are independent), there are two methods of determining the Norton impedance RNo.
- Calculate the output voltage, VAB, in the open-circuit condition (i.e., no load resistor, meaning infinite load resistance). RNo equals this VAB divided by INo.
- Replace independent voltage sources with short circuits and independent current sources with open circuits. The total resistance across the output port is the Norton impedance RNo. This is equivalent to calculating the Thévenin resistance.
- However, when there are dependent sources, the more general method must be used. This method is not shown below in the diagrams.
- Connect a constant current source at the output terminals of the circuit with a value of 1 ampere and calculate the voltage at its terminals. This voltage divided by the 1 A current is the Norton impedance RNo. This method must be used if the circuit contains dependent sources, but it can be used in all cases even when there are no dependent sources.
Example of a Norton equivalent circuit
In the example, the total current Itotal is first found from the source voltage and the total resistance seen by the source. The current through the load then follows from the current divider rule, and the equivalent resistance looking back into the circuit is found with the sources suppressed. So the equivalent circuit is a 3.75 mA current source in parallel with a 2 kΩ resistor.
Conversion to a Thévenin equivalent
A Norton equivalent circuit is related to the Thévenin equivalent by the following equations: RTh = RNo and VTh = INo · RNo (equivalently, INo = VTh / RTh).
Queueing theory
The passive circuit equivalent of "Norton's theorem" in queueing theory is called the Chandy–Herzog–Woo theorem. In a reversible queueing system, it is often possible to replace an uninteresting subset of queues by a single (FCFS or PS) queue with an appropriately chosen service rate.
See also
- Millman's theorem
- Source transformation
- Superposition theorem
- Thévenin's theorem
- Maximum power transfer theorem
- Extra element theorem
- Johnson (2003b)
- Johnson (2003a)
- Chandy et al.
- Brittain, J.E. (March 1990). "Thevenin's theorem". IEEE Spectrum 27 (3): 42. doi:10.1109/6.48845. Retrieved 1 February 2013.
- Chandy, K. M.; Herzog, U.; Woo, L. (Jan 1975). "Parametric Analysis of Queuing Networks". IBM Journal of Research and Development 19 (1): 36–42. doi:10.1147/rd.191.0036.
- Dorf, Richard C.; Svoboda, James A. (2010). "Chapter 5 – Circuit Theorems".
Introduction to Electric Circuits (8th ed.). Hoboken, NJ: John Wiley & Sons. pp. 162–207. ISBN 978-0-470-52157-1.
- Gunther, N.J. (2004). Analyzing Computer Systems Performance: With PERL::PDQ (Online ed.). Berlin: Springer. p. 281. ISBN 3-540-20865-8.
- Johnson, D.H. (2003). "Origins of the equivalent circuit concept: the voltage-source equivalent". Proceedings of the IEEE 91 (4): 636–640. doi:10.1109/JPROC.2003.811716.
- Johnson, D.H. (2003). "Origins of the equivalent circuit concept: the current-source equivalent". Proceedings of the IEEE 91 (5): 817–821. doi:10.1109/JPROC.2003.811795.
- Mayer, H. F. (1926). "Ueber das Ersatzschema der Verstärkerröhre (On equivalent circuits for electronic amplifiers)". Telegraphen- und Fernsprech-Technik 15: 335–337.
- Norton, E. L. (1926). Technical Report TM26–0–1860 – Design of finite networks for uniform frequency characteristic. Bell Laboratories.
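As a quick numerical illustration of the source-transformation relations above, here is a small Python sketch (not drawn from the cited sources). It converts a Thévenin pair (VTh, RTh) into its Norton equivalent and applies the current divider rule to a load. The component values are placeholders chosen so that they happen to reproduce the 3.75 mA and 2 kΩ figures quoted in the example, since the original circuit's values are not given here.

```python
def thevenin_to_norton(v_th, r_th):
    """Return (I_No, R_No): the Norton equivalent of a Thevenin source."""
    i_no = v_th / r_th      # short-circuit current at the terminals
    r_no = r_th             # the equivalent resistance is unchanged
    return i_no, r_no

def load_current(i_no, r_no, r_load):
    """Current through r_load using the current divider rule."""
    return i_no * r_no / (r_no + r_load)

# Hypothetical values, chosen only for illustration.
v_th, r_th = 7.5, 2e3               # 7.5 V behind 2 kOhm
i_no, r_no = thevenin_to_norton(v_th, r_th)
print(f"I_No = {i_no * 1e3:.2f} mA in parallel with R_No = {r_no / 1e3:.1f} kOhm")
print(f"Current into a 1 kOhm load: {load_current(i_no, r_no, 1e3) * 1e3:.2f} mA")
```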
Your students might know their way around a numerator and denominator, but are they ready for what’s next? Suddenly, it’s time to learn how to add fractions — and your class is confused. You’re not alone. Adding fractions may seem daunting, but it doesn’t need to be. We’ve put together a guide to help you successfully teach your students how to add fractions, covering:
- Why students struggle with fractions
- Types of fractions
- 3 easy steps for adding fractions
- Adding mixed fractions
- The importance of adding fractions
- 5 Engaging activities
Why do students struggle with fractions?
Fractions — especially fraction operations — are a tricky subject for most students. Trouble with fractions can reduce confidence in math and lead to math anxiety if students don’t receive enough support in the subject. Fractions are a struggle for a few reasons. Research has found the biggest issues are:
1. Understanding what the numbers mean
Before fractions, students are used to working with whole numbers: basic numbers that represent whole amounts. Fractions introduce students to rational numbers, which come with a whole new set of rules and patterns. The meaning behind fractions is confusing when you compare them to whole numbers. Whole numbers are only expressed one way, while fractions can be expressed in many ways and still represent the same amount. For instance, there’s only one way to represent the number three, but ²⁄₄ represents the same amount as ½, 0.5 and 50%. As a student, this is hard to wrap your head around.
2. Different operations for whole numbers and fractions
The methods you use to add, subtract, multiply and divide whole numbers are different from the methods used for fractions. Rules become much more unpredictable and confusing. Many students and teachers have a limited understanding of how or why these methods are used. Fractions are harder to represent with visuals or manipulatives, and the rules for adding them are more difficult to understand. Learning how to multiply and divide fractions can add even more confusion, as students must remember the differences between these operations. This is a big adjustment for students who are already comfortable with whole number arithmetic.
Types of fractions
Students must first understand the difference between each type of fraction to successfully add them. First, let’s start with the basic components of a fraction. A fraction represents parts of a whole. The numerator (the top number) illustrates the number of parts you have. The denominator (the bottom number) shows the total number of parts the whole is divided into. Picture a circle divided into four parts. This means four is our denominator. Of those four parts, one is highlighted. This means one is our numerator. So, our fraction is ¼ or one quarter. There are three general categories of fractions: proper, improper and mixed.
| Type | Definition | Example |
| --- | --- | --- |
| Proper fraction | The numerator is less than the denominator | ¾ (three quarters) |
| Improper fraction | The numerator is greater than the denominator | ⁷⁄₄ (seven quarters) |
| Mixed fraction | A whole number and a proper fraction combined | 1 ¾ (one and three quarters) |
In addition to these, fraction equations will be split into two distinct categories: those with like fractions and those with unlike fractions.
| Type | Definition | Example |
| --- | --- | --- |
| Like fractions | Fractions with the same denominator | ¼ and ¾ |
| Unlike fractions | Fractions with different denominators | ¼ and ⅜ |
A base knowledge of these types will help students understand what to do when faced with a question about adding fractions.
Now that you’re familiar with each type of fraction, you can get adding! Teach your students the three-step formula below to confidently tackle fraction addition equations. 3 Easy steps for adding fractions It may seem scary at first, but adding fractions can be easy. All you need to do is follow three simple steps: - Step 1: Find a common denominator - Step 2: Add the numerators (and keep the denominator) - Step 3: Simplify the fraction Let’s look at each step in a bit more detail. Step 1: Find a common denominator If your two denominators are already the same, you’re adding fractions with like denominators. Fantastic! This means you can skip to step two. If your denominators are different, you’re adding fractions with unlike denominators. When adding unlike fractions, you need to find a common denominator so you can add the two fractions together. Check out the video below to understand why we need a common denominator to add fractions. You can find the common denominator using equivalent fractions: fractions that have the same value. For instance, ²⁄₄, ³⁄₆ and ⁴⁄₈ are equivalent fractions because they can all be reduced to ½. There are two main methods for finding the common denominator. 1) The common denominator method In this method, you’ll multiply the top and bottom of each fraction by the denominator of the other. For example, consider the following equation: ⅓ + ⅙ Our fractions have two different denominators: three and six. We need to multiply the numerator and denominator in ⅓ by six, then multiply the numerator and denominator in ⅙ by three. When we do this, our new fractions become ⁶⁄₁₈ and ³⁄₁₈. The two new fractions have the same denominator, so now we can add them! 2) The least common denominator method This method involves finding the smallest of all common denominators, then multiplying your original fractions to get that denominator. To find the least common denominator, list all the multiples of the number and find the smallest number that’s the same among them. For example, using the same equation as before — ⅓ + ⅙ — you can put together a table to determine the smallest common multiple. As you can see from our table, the smallest multiple that’s the same is six. So, for ⅓ , both the numerator and denominator must be multiplied by two to get ²⁄₆. For ⅙, the numbers must be multiplied by one, so the fraction stays the same. Once again, our fractions are ready to be added! Step 2: Add the numerators (and keep the denominator) This step is rather straightforward. Add your numerators together so the sum becomes the new numerator, while the denominator stays the same. Let’s use our previous example: ⅓ + ⅙ Using our new equation from the common denominator method — ⁶⁄₁₈ + ³⁄₁₈ — we need to add six and three together. The denominator will still be eighteen. Six plus three is nine, so our answer is ⁹⁄₁₈. Step 3: Simplify the fraction If your fraction contains high numbers, you may need to simplify it. Simplifying involves finding the smallest equivalent fraction possible. In our previous equation, our answer was ⁹⁄₁₈. This number seems a bit large, so we’ll see if we can simplify it to an easier number. To simplify a fraction, you need a common factor: a number that will divide into both numbers evenly. For example, two is a common factor of four and six, because both numbers can be divided by two. The two easiest methods for simplifying a fraction are: 1) Trial and error For this method, just keep dividing the numerator and denominator by small numbers. 
Start with two, then three, then four — and so on until you get the smallest possible answer. With our answer of ⁹⁄₁₈, we can keep trying to divide by small numbers until we find one that works. Can both nine and eighteen be divided by two? No. We can’t divide nine by two evenly. Ok, let’s try another number. Can both nine and eighteen be divided by three? Yes! When we divide both by three, our fraction becomes ³⁄₆. Now that we have a simpler answer, it’s time to see if we can simplify even further. Three and six can both be divided by three again, so our final answer is ½.
2) Find the Greatest Common Factor (GCF)
The GCF is the highest number that divides evenly into two or more numbers. This method is similar to finding the least common denominator — you’ll find the answer by listing all possible factors. Using our previous example of ⁹⁄₁₈, we’ll find and list all the factors of each number, starting from one. Once you’ve listed all of the factors of that number, all you have to do is find the largest number repeated in both lists. A handy table helps for this, too.
| Factors of 9 | Factors of 18 |
| --- | --- |
| 1, 3, 9 | 1, 2, 3, 6, 9, 18 |
We’ll use our table to find the largest number common to both numbers. In this case, the greatest common factor for nine and eighteen is nine. Now we can divide both numbers by nine to get our reduced fraction: ½. When you put all three steps of adding fractions together, the process is: find a common denominator, add the numerators, then simplify the result.
Adding mixed fractions
The above steps work great for proper and improper fractions, but what about adding fractions with whole numbers? Adding mixed fractions is actually quite simple: just convert it to an improper fraction and you’re ready to start adding! Every mixed fraction can be made into an improper fraction. For instance, 1 ¾ is the same thing as ⁷⁄₄. There are three steps for converting mixed fractions to improper fractions:
1. Multiply the whole number by the denominator
Let’s use 1 ¾. If we multiply our whole number (one) by our denominator (four), we get four.
2. Add that number to the numerator
Our new number (four) plus our numerator (three) is seven.
3. Write your new numerator over the original denominator
Our new numerator (seven) over our original denominator (four) equals ⁷⁄₄. Now you can add the fraction!
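To make the three steps concrete, here is a short Python sketch (illustrative only, not from the article; the function names are made up) that finds a least common denominator, adds the numerators, and simplifies with the greatest common factor. It also converts a mixed number to an improper fraction the same way as described above.

```python
from math import gcd

def add_fractions(n1, d1, n2, d2):
    """Add n1/d1 + n2/d2 using the article's three steps."""
    common = d1 * d2 // gcd(d1, d2)                            # step 1: least common denominator
    numerator = n1 * (common // d1) + n2 * (common // d2)      # step 2: add the numerators
    factor = gcd(numerator, common)                            # step 3: simplify with the GCF
    return numerator // factor, common // factor

def mixed_to_improper(whole, n, d):
    """Convert a mixed number like 1 3/4 into an improper fraction (7/4)."""
    return whole * d + n, d

print(add_fractions(1, 3, 1, 6))     # 1/3 + 1/6 -> (1, 2), i.e. 1/2
print(mixed_to_improper(1, 3, 4))    # 1 3/4 -> (7, 4)
```

Python's built-in fractions.Fraction performs the same simplification automatically, so Fraction(1, 3) + Fraction(1, 6) also evaluates to Fraction(1, 2); the hand-rolled version simply makes each step visible for teaching.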
One study found that in the United States and the United Kingdom, elementary students’ fraction knowledge could predict general math abilities in high school. The Survey of Workplace Skills, Technology, and Management Practices (STAMP) found that 68% of employed people 18 or older used fractions in their daily work. This means a significant number of adults in the United States require a solid base knowledge of fractions and their operations. Learning these skills as early as possible is key for success in many workplaces.
5 Engaging activities for adding fractions
Now that you know what to teach your students about adding fractions, let’s focus on the how. Get inspired by these five engaging activity ideas to supplement your adding fractions lessons.
1) Prodigy
Reteaching with Prodigy as a class activity is one of the ways Mrs Patton utilizes Prodigy in her classroom, and the on-screen colored pencils allow students to easily show their work. Prodigy is a free, curriculum-aligned learning platform with over 1,500 skills for kids to practice math. You can use it to target all kinds of fractions skills, from basic understanding to more complicated operations like addition. Prodigy takes players through an exciting adventure, where they answer math questions to “battle” other characters. Students are so engaged with the game, they’ll actually want to keep playing — and practicing more math as a result! The platform is a great tool for lesson supplementation, homework assignments and more. It can also help you differentiate instruction and target specific trouble spots, helping each student succeed at their own pace.
“Our last test was on Fractions, and that was the first time that I had really made sure every day on Prodigy they were practicing those specific skills, and the test results were very reflective of the additional practice they’d received!” (3rd Grade Teacher, East Syracuse-Minoa Central Schools)
2) Bump game
Promote some healthy competition in your classroom with an engaging board game where players “bump” each other by adding fractions to claim a spot on the board. You can find lots of bump games for various subjects. In this adding fractions edition, players must roll dice to find a relevant equation, then place their game pieces on the fraction that corresponds with the answer. The player who gets all their game pieces on the board first is the winner!
3) Word problems
Word problems for fraction equations provide real-life examples of the questions students are answering, helping them understand the purpose of such questions. Word problem cards and worksheets are a great way to provide these questions. If you want your class to be more involved, you can use manipulatives or even the students themselves. For example, “if three people are wearing green and two are wearing blue, what’s the fraction of people wearing green or blue in the class?”
4) Equation architects
This activity involves students drawing or building equations to visualize what adding fractions looks like. Get your students to draw out equations or use manipulatives for a better idea of what it really means to add fractions. Fraction bars or a fraction dial are both great options to make this abstract concept more digestible and concrete.
5) Math mates
This active game gets students out of their seats, collaborating with classmates and practicing math… all at once! As one teacher described it: “Today, we borrowed an idea from @themathchick5 and found a new way to practice adding and subtracting fractions with unlike denominators. Students wear a name tag with a fraction on it and partner up with other students to create new number sentences! It was a fun way to get out of our seats!” Each student will have a different fraction. Players go around the room finding partners and working together to add their fractions. This game is great for practicing skills learned in class and encouraging teamwork.
Final thoughts on adding fractions
Moving from basic fraction skills to addition is certainly intimidating, but adding fractions can be made easy using the three simple steps above. Use the information in this guide to conquer your next math lesson and make adding fractions a breeze for your students. Next up: subtracting, multiplying and dividing. Oh my!
>>Create or log in to your teacher account on Prodigy – a free, game-based learning platform for math that’s easy to use for educators and students alike. Aligned with curricula across the English-speaking world, it’s loved by more than a million teachers and 50 million students. 🏫
When low- to middleweight stars like our Sun approach the end of their life cycles, they eventually cast off their outer layers, leaving behind a dense white dwarf star. These outer layers become a massive cloud of dust and gas, characterized by bright colors and intricate patterns, known as a planetary nebula. Someday, our Sun will turn into such a nebula, one which could be viewed from light-years away. This process, where a dying star gives rise to a massive cloud of dust, was already known to be incredibly beautiful and inspiring thanks to many images taken by Hubble. However, after viewing the famous Ant Nebula with the European Space Agency’s (ESA) Herschel Space Observatory, a team of astronomers discovered an unusual laser emission that suggests that there is a double star system at the center of the nebula. The study, titled “Herschel Planetary Nebula Survey (HerPlaNS): hydrogen recombination laser lines in Mz 3”, recently appeared in the Monthly Notices of the Royal Astronomical Society. The study was led by Isabel Aleman of the Institute of Astronomy and Astrophysics, together with colleagues from multiple universities. The Ant Nebula (aka. Mz 3) is a young bipolar planetary nebula located in the constellation Norma, and takes its name from the twin lobes of gas and dust that resemble the head and body of an ant. In the past, this nebula’s beautiful and intricate nature was imaged by the NASA/ESA Hubble Space Telescope. The new data obtained by Herschel also indicate that the Ant Nebula beams intense laser emissions from its core. In space, infrared laser emissions are detected at very different wavelengths and only under certain conditions, and only a few of these space lasers are known. Interestingly enough, it was astronomer Donald Menzel – who first observed and classified the Ant Nebula in 1920 (hence it is officially known as Menzel 3 after him) – who was one of the first to suggest that lasers could occur in nebulae. According to Menzel, under certain conditions, natural “light amplification by the stimulated emission of radiation” (which is where we get the term laser from) would occur in space. This was long before the discovery of lasers in laboratories, an occasion that is celebrated annually on May 16th, UNESCO’s International Day of Light. As such, it was highly appropriate that this paper was also published on May 16th, celebrating the development of the laser and its inventor, Theodore Maiman. As Isabel Aleman, the lead author of the paper, described the results: “When we observe Menzel 3, we see an amazingly intricate structure made up of ionized gas, but we cannot see the object in its center producing this pattern. Thanks to the sensitivity and wide wavelength range of the Herschel observatory, we detected a very rare type of emission called hydrogen recombination line laser emission, which provided a way to reveal the nebula’s structure and physical conditions.” “Such emission has only been identified in a handful of objects before and it is a happy coincidence that we detected the kind of emission that Menzel suggested, in one of the planetary nebulae that he discovered,” she added. The kind of laser emission they observed needs very dense gas close to the star. By comparing observations from the Herschel observatory to models of planetary nebulae, the team found that the gas emitting the lasers is about ten thousand times denser than the gas seen in typical planetary nebulae, and in the lobes of the Ant Nebula itself.
Normally, the region close to the dead star – in this case, roughly the distance between Saturn and the Sun – is quite empty, because its material was ejected outwards by the dying star. Any lingering gas would soon fall back onto it. But as Professor Albert Zijlstra, from the Jodrell Bank Centre for Astrophysics and a co-author on the study, put it: “The only way to keep such dense gas close to the star is if it is orbiting around it in a disc. In this nebula, we have actually observed a dense disc in the very center that is seen approximately edge-on. This orientation helps to amplify the laser signal. The disc suggests there is a binary companion, because it is hard to get the ejected gas to go into orbit unless a companion star deflects it in the right direction. The laser gives us a unique way to probe the disc around the dying star, deep inside the planetary nebula.” While astronomers have not yet seen the expected second star, they are hopeful that future surveys will be able to locate it, thus revealing the origin of the Ant Nebula’s mysterious lasers. In so doing, they will be able to connect two discoveries (i.e. the planetary nebula and the laser emission) made by the same astronomer almost a century ago. As Göran Pilbratt, ESA’s Herschel project scientist, added: “This study suggests that the distinctive Ant Nebula as we see it today was created by the complex nature of a binary star system, which influences the shape, chemical properties, and evolution in these final stages of a star’s life. Herschel offered the perfect observing capabilities to detect this extraordinary laser in the Ant Nebula. The findings will help constrain the conditions under which this phenomenon occurs, and help us to refine our models of stellar evolution. It is also a happy conclusion that the Herschel mission was able to connect together Menzel’s two discoveries from almost a century ago.” Next-generation space telescopes that could tell us more about planetary nebulae and the life cycles of stars include the James Webb Space Telescope (JWST). Once this telescope takes to space in 2020, it will use its advanced infrared capabilities to see objects that are otherwise obscured by gas and dust. These studies could reveal much about the interior structures of nebulae, and perhaps shed light on why they periodically shoot out “space lasers”. Further Reading: University of Manchester, ESA, MNRAS
Diannah Kyara November 21, 2020 Worksheets
6th Grade Word Problems: Ratios, Fractions, Addition, Subtraction, Multiplication and Division, examples with step by step solutions, Fraction of a set word problems, Model Drawing for 6th Grade, fractions of fractions and fractions of remaining parts, Travel Rate Problems. After reading a problem, children have to deduce a formula for finding the required fraction. This activity will work well as a supplementary math activity for children in 4th, 5th, 6th and 7th grades who need extra practice on their abilities to solve word problems involving fractions. This set of worksheets contains introductory lessons, step-by-step solutions to sample problems, a variety of different practice problems, reviews, and quizzes. When finished with this set of worksheets, students will be able to solve word problems involving ratios, fractions, mixed numbers, and fractional parts of whole numbers.
Free worksheets for ratio word problems: Find here an unlimited supply of worksheets with simple word problems involving ratios, meant for 6th-8th grade math. In level 1, the problems ask for a specific ratio (such as, “Noah drew 9 hearts, 6 stars, and 12 circles…”).
Printable 5th Grade Math Word Problems Worksheets Pdf – Fifth-grade teachers often need to prepare worksheets for their math class. These sheets are essentially a printable version of the test that will be given at that grade level. It is good practice to give students a worksheet at the start that can help them prepare.
Mixed word problem worksheets for 5th grade: Below are eight grade 5 math worksheets with mixed word problems including the 4 basic operations (addition, subtraction, multiplication and division), fractions, decimals, LCM / GCF and variables. Mixing different types of word problems encourages students to read and think about the questions.
Free math minutes, weekly math skills practice, and reading with math word problems. Your fifth graders will be asking for more of these! Your Free 5th Grade Math PDF Worksheets You’d Actually Want to Print.
These free interactive math worksheets are suitable for Grade 2. Use them to practice and improve your mathematical skills: Writing Numbers up to 1,000, Number Words up to 1,000, Skip Counting by 2’s, 5’s, 10’s, Identify Even and Odd.
Second Grade Math Worksheets: When students start 2nd grade math, they should already have good comprehension of addition and subtraction math facts. Many second graders will be ready to start working with early multiplication worksheets, perhaps with the help of a Multiplication Chart, Multiplication Table or other memory aid.
This math worksheet presents an equation and asks your child to use mental math skills to fill in the missing operation, either + or -. Adding 2-digit numbers (1st grade, 2nd grade): In this math worksheet, your child can practice adding 2-digit numbers.
Sep 13, 2020 – Here is a selection of our printable math worksheets, math games and math resources for 2nd grade.
More Math Word Problems. Print the PDF: More Math Word Problems. This worksheet contains problems that are a bit more challenging than those on the previous printable. For example, problem No.
1 states: “Four friends are eating personal pan pizzas. Jane has 3/4 left, Jill has 3/5 left, Cindy has 2/3 left and Jeff has 2/5 left…”
Multiple-Step Word Problems: Word problems where students use reasoning and critical thinking skill to solve each problem. Math Word Problems (Mixed): Mixed word problems (stories) for skills working on subtraction, addition, fractions and more. Math Worksheets – Full Index: A full index of all math worksheets on this site.
The worksheets on this page combine the skills necessary to solve all four types of problems covered previously (addition word problems, subtraction word problems, multiplication word problems and division word problems) and they require students to determine which operation is appropriate for solving each problem.
5th Grade Math Word Problems Worksheets Pdf: Use these free 5th Grade Math Word Problems Worksheets Pdf for your personal projects or designs.
Fifth Grade Math Curriculum: What Students Will Learn. Common Core Math Standards for 5th-grade students cover writing and interpreting numerical expressions; analyzing patterns and relationships; understanding the place-value system; performing operations with multi-digit whole numbers and decimals to the hundredths; using equivalent fractions as a strategy to add and subtract fractions.
5th grade math worksheets – Printable PDF activities for math practice. This is a suitable resource page for fifth graders, teachers and parents. These math sheets can be printed as extra teaching material for teachers, extra math practice for kids or as homework material parents can use.
Free 5th Grade Math Word Problems Worksheets (PDF) for topics including estimating, rounding, fractions, and decimals. For all Grade 5 math teachers and parents. Enjoy!
This page hosts a vast collection of multiplication word problems for 3rd grade, 4th grade, and 5th grade kids, based on real-life scenarios, practical applications, interesting facts, and vibrant themes. Featured here are various word problems ranging from basic single-digit multiplication to two-digit and three-digit multiplication.
Showing top 8 worksheets in the category – Grade 5. Some of the worksheets displayed are Ab5 gp pe tpcpy 193604, Math mammoth grade 5 a worktext, Vocabulary 5th grade sentences fifth grade 5, Vocabulary 5th grade paragraphs fifth grade 5, Grade 5 math practice test, Grade 5 national reading vocabulary, Grade 5 reading practice test, Math 5th grade problem solving crossword name.
Fifth Grade Measurement Worksheets and Printables: This collection of worksheets is made to measure for fifth grade math students. Our fifth grade measurement worksheets provide practice with calculating solid and liquid volume, converting measurements to different units, and using various numerical operations to solve word problems. Use these measurement worksheets to compare various attributes of an object, measure length, weight and capacity and temperature of numerous objects and substances. Also learn to count money (coins and bills) and tell time from analog clocks.
Our measurement worksheets educate and amuse kids of all ages with a plethora of interactive activities, from comparing the sizes of different objects to understanding how much liquid is in a pint versus a gallon. Whether it’s first graders learning the difference between inches, feet, and yards, or fifth graders…
How to Solve Systems of Equations
Hey guys! Welcome to this video over systems of equations. A system of equations is a group of two or more equations, and each of the equations within the group has an unknown variable. When given a system of equations, the goal is to find the value for each of the unknown variables. In this video we will discuss two tools to help you solve for the unknown variables: substitution and elimination.
Substitution is a way of solving the system by getting rid of all of the variables, except for one, and then solving that equation. The best time to use substitution is when you have a variable that has a coefficient of one or negative one. The reason why is that if it has a coefficient of one or negative one, then you don’t need to undo multiplication or division; you just need to undo addition or subtraction in order to isolate a variable. There are three steps that you need to follow in order to be able to solve the system using substitution. First, solve for x or y in one equation. Second, plug the x or y that you solved for into the other equation and then solve. Finally, use the number that you get when you solve to solve for the other variable or variables, depending on how many equations you have.
So, let’s get started. In this example, we can see that our second equation has a variable with one as the coefficient. So, that lets us know to solve for x. Now, you can actually solve for any of the variables… It will just always be easier to solve for one that has a coefficient of one or negative one. Now, to do this we just add 9y to both sides, to get x = 9y − 19. So, we’ve done what our first step tells us to do by solving for one of our variables. Step two now tells us to plug the variable that we have solved for into our other equations. So, we are going to plug our x into 4x + 3y = 2 and solve. Because we know that x is equal to 9y − 19, we can substitute this in for the x in our first equation. We now have 4(9y − 19) + 3y = 2. Now that we have it down to one variable, we are able to solve for the value of that variable, so in this case y. Let’s rewrite this by multiplying our 4 by everything inside the parentheses: 36y − 76 + 3y = 2. Now we can add 76 to both sides… you can do this multiple ways. If you wanted to add your y’s together first, then do that. I’m just doing what is easier in my mind. Once I add 76 to both sides and add my y’s together, I get 39y = 78. Now, I divide both sides by 39 to get y is equal to 2. So, that was step 2. Now, what does step 3 tell us to do? “Use the number that you get when you solve to solve for the other variable or variables.” All we have to do here is take our y value, so 2, and plug it into either of our original equations to solve for x. You can plug it into either the first or the second, but I am going to plug it into the second equation. So, x − 9(2) = −19. Let me rewrite this as x − 18 = −19. Now we can add 18 to both sides to get x = −1… and we’re done! We have found the value of both of our variables using substitution.
Now, let’s take a look at how to solve a system using elimination. The reason this tool is called elimination is because you add together the two equations in order to eliminate one of your variables. Look at this example:
3x − 4y = −27
7x + 4y = 57
We can tell just by looking that our y’s will cancel out once we add the two equations together. We have 10x = 30, and we divide both sides by 10 to give us x is equal to 3.
Now, we can take our x value and plug it into either of our original equations. I’ll plug it into the first one here. So, we have 3(3) − 4y = −27, and I’ll rewrite this as 9 − 4y = −27. To solve we need to move the −4y and the −27; and to do this we will add a positive 4y to both sides, and a positive 27 to both sides, giving us 9 + 27, which is 36, which is equal to 4y (36 = 4y). So we divide both sides by 4, and that gives us y is equal to 9.
That was a good example to learn how elimination works, but it may not always be that our terms cancel so easily. Like in this example:
9x − 3y = −57
2x + 6y = 34
None of our terms cancel right off the bat, so we will need to do a little manipulation in order to get them to do what we want them to do. So, what could we do to get our terms to cancel? Well, there are a couple of things, but the easiest, as it appears to me, is to multiply our first equation by 2. That will allow us to cancel our y’s, because we will be adding a negative six and a positive six. So, let’s try that. We have (9x − 3y = −57) × 2, so let’s rewrite this as 18x − 6y = −114. Now, we need to take this and add it to our other equation to cancel the y term.
18x − 6y = −114
2x + 6y = 34
+_______________
20x = −80
Now, we need to divide both sides by 20 to give us x is equal to −4. So, now that we know that x is equal to −4, we can plug −4 into where our x is in either one of the original equations. I’ll plug it into the second one.
2(−4) + 6y = 34
I’ll simplify this as −8 + 6y = 34, then add 8 to both sides to get 6y = 42, and finally divide by 6 to get y is equal to 7. So, our final answer to this problem is x = −4, and y = 7.
I hope that this video on how to solve a system of equations using substitution and elimination has been helpful. For further help, be sure to subscribe to our channel below. See you next time!
Provided by: Mometrix Test Preparation
Last updated: 04/13/2018
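For readers who want to check these worked answers programmatically, here is a small Python sketch (not part of the video) that solves the same three systems with NumPy's linear solver; the coefficient matrices are simply the equations above written in matrix form.

```python
import numpy as np

def solve_system(a, b):
    """Solve the 2x2 linear system A @ [x, y] = b."""
    return np.linalg.solve(np.array(a, dtype=float), np.array(b, dtype=float))

# 4x + 3y = 2,   x - 9y = -19  -> expects x = -1, y = 2
print(solve_system([[4, 3], [1, -9]], [2, -19]))

# 3x - 4y = -27, 7x + 4y = 57  -> expects x = 3, y = 9
print(solve_system([[3, -4], [7, 4]], [-27, 57]))

# 9x - 3y = -57, 2x + 6y = 34  -> expects x = -4, y = 7
print(solve_system([[9, -3], [2, 6]], [-57, 34]))
```

Substitution and elimination are exactly what a solver like this does under the hood (Gaussian elimination), so the printed values should match the hand-worked results above.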
Deforestation is known as one of the most important elements for changes in land use and land cover. It is recognised as a major driver of the loss of biodiversity and ecosystem services. Globally, it has been occurring at an alarming rate of 13 million hectares per year. It is believed that high population growth coupled with the rapid expansion of agriculture is responsible for the accelerated rates of deforestation, especially in developing countries. Deforestation & Land Categories Malawi is a developing country in which enormous pressure is being exerted on forest resources. Forest cover of the country reduced from 47 percent in 1975 to 36 percent in 2005. This is the highest deforestation rate in the Southern African Development Community (SADC) region, representing a net loss of some 30,000 to 40,000 hectares per year. This forest loss is mainly attributed to agriculture expansion and excessive use of biomass, such as wood, charcoal, and agricultural residues mostly used for cooking and heating. That is, biomass accounts for 88.5 percent of the country’s energy demand, 6.4 percent comes from petroleum, 2.8 percent from electricity (hydro power), and 2.4 percent from coal. Agriculture is a source of livelihood for more than 90 percent of the rural and urban population and represents more than three quarters of national exports. The expansion of subsistence agriculture to meet the food needs of the burgeoning population has been one of the main causes of deforestation in Malawi. While 62 percent of the land was agriculture in 1991, by 2008, the agriculture land had reached 70 percent. In commercial farming, tobacco is one of the major export crops and accounts for approximately 67 percent of the export earnings from agriculture in Malawi. However, the percentage of deforestation caused by tobacco farming is very high—it reached 26 percent by the early 2000s. Tobacco is further ranked as the highest user of wood among non-household users in Malawi. It involves the use of wood and twigs in construction of barns for air-cured tobacco and firewood for fuel-cured tobacco. The construction industry is also heavily reliant on wood energy for brick production and is ranked second from tobacco in this regard. The brick-making industry alone consumes approximately 850,000 metric tons of wood per year. That is, biomass in the form of wood fuel is the largest form of primary energy consumed in Malawi. Malawi obtains 88 percent of its total energy and 98 percent of its household energy from traditional biomass, while access to modern energy is less than 10 percent. Inefficient production and unsustainable use of biomass energy have contributed to environmental degradation, such as high deforestation, desertification, and soil erosion. There are three land categories in Malawi: public land, private land, and customary land. Public land is the land held in trust for the people of Malawi and managed by the government. Private land is the land that is registered as private under the Registered Land Act. Customary land is the land used for the benefit of the community as a whole within the boundaries of a traditional management area (Land Act 2016). Customary land is held or used by community members under customary law and is under the jurisdiction of the customary traditional authorities. The customary land makes up around 85 percent of the total land in Malawi. 
Forest resources on customary land are usually the most accessible to the majority of rural residents, and are also very important because they provide not only timber and fuel wood but also non-timber forest products for both the rural and urban population. Although previous studies and projects have provided a fundamental understanding regarding the protection of forests and forests’ contribution to rural development, achieving a reduction in deforestation requires an understanding of how local people utilize and manage forest resources. That is, their behaviour and impact on the forests differ substantially, despite the fact that each local community operates under the same national legislation. Local-level data provide rich information on how people at the local level interact with forest resources. Conversely, country-level data on the rates of deforestation do little to help policymakers and scholars unravel the web comprising the causes of forest loss. Deforestation rates vary significantly within each country, and furthermore, an understanding of the causes of such dynamics and unique variation within the country is critical for the establishment of proper interventions.
Factors Or Causes
Several studies on agriculture expansion, benefits and trade-offs of tobacco, land tenure, biomass use, population, and poverty, and their impacts on forest resources have been conducted in Malawi. However, only a few studies have been conducted at the local level about the drivers of deforestation, especially on customary forest land. There are some anthropogenic proximate factors or causes of deforestation, which are human activities or immediate actions, such as agriculture expansion, that directly impact forest cover. In the case of Malawi, agriculture expansion, tobacco growing, and brick production are regarded as the major proximate factors of deforestation. Underlying driving factors or forces are fundamental social processes, such as population dynamics, that underpin the proximate causes. The classification of the underlying driving factors varies from area to area, just as that of the proximate factors does. Deforestation has been discussed in a research framework of land science with the focus on the proximate factors and underlying driving factors; however, there are no studies using such a research framework in Malawi. This study, adopting this research framework, aims to identify and analyse forest cover change and the underlying driving factors associated with the proximate factors of deforestation on customary land in the rural area of Mwazisi, Malawi, where no research about the drivers of deforestation has been conducted to date.
Choosing Mwazisi Zone As Study Area
The Mwazisi zone, which is customary land, is located to the west of the Rumphi district in the northern region of Malawi. It consists of six Village Development Committees (VDCs) under the Traditional Authority Chikulamayembe. Traditional Authority is a form of leadership in which the authority of an organisation or a ruling regime is largely tied to tradition or custom. The Mwazisi zone is located along the Vwaza Marsh Game Reserve (VMGR) and covers an area of 117 km², which contains 1,126 households. The total population of the study area is estimated to be approximately 6,570. The area is mostly covered by Miombo woodlands, with an average temperature of 22.5 °C in the hot dry season. The highest average monthly precipitation, 191 mm, falls in January.
Analysis On Forest Change & Drivers The results of the classification on forest and land cover show that forest was the dominant land cover in the year 1991; however, it has declined tremendously over the years. Forest covered 66 percent of the area in 1991 and decreased to 45.8 percent in 2017. The annual rate of forest cover loss between 1991 and 2004 was 1.3 percent and increased to 1.6 percent in the period between 2004 and 2017. Interviews show that most households (80.7 percent) depend on agriculture to support their daily livelihood while only 19.3 percent earn their living through business and employment. All households grow a crop of maize as a staple food, and for the past 15 years, 47.6 percent of the households have expanded their maize farm. On average, each household has expanded its agriculture land by approximately 0.57 hectares during the past 15 years. Most households (91.2 percent) expanded their maize farms due to an increase in family size (on average, each household has four children) and a lack of farm inputs. The Pearson product-moment correlation coefficient also shows a positive correlation between the frequency of the agriculture expansion and the number of children in a household. Tobacco is the main cash crop in the area and is grown by 45.4 percent of households, while the remaining households depend on subsistence farming. Of the tobacco farmers, 46.4 percent expanded their agriculture land by an average of approximately 0.39 hectares per year. These farmers expanded their agriculture land mainly to increase earnings or profit. The type of tobacco grown in the study area is burley, which is air-cured in barns. There are three types of building materials in Mwazisi: clay bricks, mud, and wood. Clay bricks are the main building material and are used by 65.7 percent of the households. Clay bricks are burned before their use in construction and the source of energy is wood. Of the brick-walled houses in the area, 68 percent used wood from the forests and 31 percent used wood left over after the clearing of land for agriculture. Field results show that each brick-walled house used 4 metric tons of wood, on average. An analysis of the market systems of various crops grown in the area shows that tobacco has a well-developed market structure designed to reach smallholder farmers in the rural areas. The tobacco crops are sold to international companies based in the capital city of Lilongwe. The tobacco growing is practiced as a form of contract farming, which helps smallholder farmers by providing access to the market, inputs, and extension services. That is, tobacco companies provide loans, expertise, and transportation of the farm produce to the tobacco market. However, it is more expensive and difficult for smallholder farmers to obtain expertise and loans on crops such as ground nuts, maize, and soybeans. This has resulted in an increase in the number of tobacco farmers despite its impact on the forests and environment. A comparison of the average price of tobacco crop with others grown in the area, such as maize, groundnuts, and soybeans has shown that tobacco has had the highest average price over the years. This motivated 71.3 percent of the farmers while the availability of loans and the market motivated 28.8 percent. However, a comparison of the average yield per hectare per year shows that maize has the highest average yield, followed by groundnuts. There are no population data for the study area; however, there are data for Rumphi. 
There is an increasing trend in population in the district and an annual population growth rate of Rumphi is 3.4 percent. This has resulted in an increase in demand for land for both settlement and agriculture. The Forest Act is a fundamental tool for proper forest use and management of private, customary, and public land in Malawi. The results from the field survey, however, show that 95.2 percent of households are unfamiliar with the Act. Most households (97.7 percent) are unaware of the prohibition of forest wood extraction for brick burning. Furthermore, 97.8 percent of tobacco farmers are unaware of the prohibition of forest wood extraction for tobacco processing. The focus group discussion and interviews with the officers from agriculture and forestry reveal the existence of financial and material constraints in the district. This has led to a reduction in field activities, such as monitoring, awareness campaigns, and law enforcement, especially on customary land forests. With few resources in the district, priority is mostly given to the forest reserves (one gazette and three proposed forest reserves). Tobacco companies have been involved in deforestation mitigation activities, notably tree planting. However, the initiative has yielded few results. Field survey data show that approximately 10,980 tree seedlings were distributed to tobacco farmers by four tobacco companies in 2016. The quantity of tree seedlings given to each farmer is determined by the size of the farm (i.e. 130 trees seedlings per 0.5 hectares). Almost all farmers (94 percent) planted the seedlings; however, only approximately 257 seedlings survived. The farmers complained about the late distribution of the seedlings (usually distributed towards the end of the rainy season), which resulted in the low survival of the planted seedlings. The focus group discussion and interviews identified that the four tobacco companies do not monitor their farmers while they are planting and caring for the distributed seedlings. Furthermore, there is lack of collaboration between the tobacco companies and governmental departments. That is, the companies rarely share information, resulting in officers’ failure to follow up on any activities conducted by the tobacco companies. Deforestation on Customary Land The majority of users of wood energy are found in the customary land in rural areas, where almost 90 percent of the population lives. According to the literature over 50 percent of the wood energy in Malawi comes from customary forests and woodlands. Forests on customary land are managed by the rural community; therefore, proper knowledge, support, and empowerment are required, although they are imbalanced between the rural and urban areas. According to Sillah, the awareness level of the local population concerning conservation and rational utilisation of forest resources must be augmented to acquire the active participation and commitment of communities and individuals. The findings of this study, however, show a low level of awareness among those in the local population regarding forest use and management. The lack of resources at the district level has partly contributed to the problem. For example, the forestry budget for one year (2016–2017) for Rumphi was US$9366.87 with a monthly budget of US$780.57. This has resulted in a reduction in law enforcement, awareness campaigns, and monitoring, especially for customary land forests as the interviewees accounted for. 
Developing countries barely meet the financial, material, and personnel requirements for sustainable forest management. People continue to illegally extract wood from customary land forests for either commercial or non-commercial purposes. Measures to Mitigate Pressure on Forests Tobacco is an important cash crop in Malawi, as it accounts for 35 percent of the gross domestic product. The results of this study suggest the existence of a number of factors that motivate farmers to grow tobacco over other cash crops, which include: (1) the availability of loans facilitated by the tobacco companies, (2) a better price for tobacco compared to that of other cash crops, and (3) easy access to tobacco information and the availability of a market for the crop. These results are similar to those of the research conducted by the Centre for Agricultural Research and Development. Hall reported that most governments send out mixed messages regarding their concern for people and the environment, while actively and assiduously promoting the very economic sectors that drive deforestation. If the supply chains for alternative crops were developed to the level of tobacco supply chains, the prices and profitability of these crops would also grow and eclipse those of contract tobacco. This, in turn, would help to reduce the pressure on forest resources exerted by tobacco farming. The alternative for brick burning in the study area would be an introduction of stabilised soil bricks (SSBs) and promotion of its use. This method involves the use of either soil alone or a mixture of soil and a minimum amount of 10 percent cement. The mixed soil and cement are compressed at high pressure and are cured under a shade. This method produces bricks using very little or no energy; therefore, this alternative would lead to reduction of deforestation. Landsat images were used to assess forest cover changes of the study area. Forest cover in Mwazisi was reduced from 66 percent in 1991 to 45.8 percent in 2017. Qualitative and quantitative methods were used to assess socioeconomic conditions, forest dependency, and the underlying driving factors of deforestation. Households continue to depend on forest resources for (1) agriculture expansion, (2) tobacco curing, and (3) brick burning. The underlying factors towards these anthropogenic factors are the market system, poverty, and population growth, expensive alternative building materials, lack of awareness, lack of resources, and lack of commitment. Each of these underlying drivers of deforestation interacts with single or multiple proximate factors. Additionally, there are multiple underlying driving factors working together to underpin each proximate factor of deforestation, thereby impacting the forest cover reduction in Mwazisi. Synergies also exist between some underlying driving factors, such as a lack of awareness and resources. A set of economic, institutional, and demographic factors underpin agriculture expansion, tobacco growing, and brick burning in Mwazisi, Malawi. The following recommendations would facilitate the reduction in the deforestation rate: Providing technical support to the village heads and Community-Based Natural Resources Management Committee on forest management, and monitoring the tobacco companies operating in the district.
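The headline forest-cover figures can be sanity-checked with a short calculation. The sketch below is only illustrative: it uses a common compound-rate formula, which is not necessarily the exact method the study's authors applied, together with the cover percentages quoted above.

```python
def annual_loss_rate(cover_start, cover_end, years):
    """Compound annual rate of forest-cover loss between two observations."""
    return 1 - (cover_end / cover_start) ** (1 / years)

# Forest cover in Mwazisi: 66% in 1991 and 45.8% in 2017, per the classification results above.
rate_overall = annual_loss_rate(66.0, 45.8, 2017 - 1991)
print(f"Average annual loss, 1991-2017: {rate_overall:.1%}")
```

The result, roughly 1.4 percent per year, falls between the 1.3 percent (1991–2004) and 1.6 percent (2004–2017) sub-period rates reported above, which is what one would expect from an accelerating trend.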
Objective: To look for relationships between two quantitative variables.
Scatterplots may be the most common and most effective display for data.
- In a scatterplot, you can see patterns, trends, relationships, and even the occasional extraordinary value sitting apart from the others.
Scatterplots are the best way to start observing the relationship and the ideal way to picture associations between two quantitative variables. When looking at scatterplots, we will look for direction, form, strength, and unusual features.
Direction:
- A pattern that runs from the upper left to the lower right is said to have a negative direction (just like the graph of a line with a negative slope).
- A trend running the other way has a positive direction (just like the graph of a line with a positive slope).
Direction (cont.): Can the NOAA predict where a hurricane will go? The figure shows a negative direction and a negative association between the years since 1970 and the prediction errors made by NOAA. As the years have passed, the predictions have improved (errors have decreased).
Form:
- If the relationship isn’t straight, but curves gently, while still increasing or decreasing steadily, we can often find ways to make it more nearly straight.
Strength:
- At one extreme, the points appear to follow a single stream (whether straight, curved, or bending all over the place).
Strength (cont.):
- At the other extreme, the points appear as a vague cloud with no discernible trend or pattern.
- Note: we will quantify the amount of scatter soon.
It is important to determine which of the two quantitative variables goes on the x-axis and which on the y-axis. This determination is made based on the roles played by the variables. When the roles are clear, the explanatory or predictor variable goes on the x-axis, and the response variable (variable of interest) goes on the y-axis.
What do you expect the scatterplot to look like? Remember direction, form, strength, and unusual features. 1. Drug dosage and degree of pain relief 2. Calories consumed and weight loss
Data collected from students in Statistics classes included their heights (in inches) and weights (in pounds): Here we see a positive association and a fairly straight form; there seems to be a high outlier.
How strong is the association between weight and height of Statistics students? If we had to put a number on the strength, we would not want it to depend on the units we used. A scatterplot of heights (in centimeters) and weights (in kilograms) doesn’t change the shape of the pattern.
Note that the underlying linear pattern seems steeper in the standardized plot than in the original scatterplot. That’s because we made the scales of the axes the same. Equal scaling gives a neutral way of drawing the scatterplot and a fairer impression of the strength of the association.
The points in the upper right and lower left (those in green) strengthen the impression of a positive association between height and weight. The points in the upper left and lower right, where zx and zy have opposite signs (those in red), tend to weaken the positive association. Points with z-scores of zero (those in blue) don’t vote either way, because their product is zero.
The correlation coefficient (r) gives us a numerical measurement of the strength of the linear relationship between the explanatory and response variables.
Calculating this by hand can be time-consuming and redundant. Below are the steps to calculate it with a calculator:
- Make sure your diagnostics are ON (2nd Catalog, scroll to DiagnosticOn, Enter).
- Store your values into L1 and L2 (x and y respectively).
- Stat, Calc, 8: LinReg(a+bx).
- Before pressing Enter, define the lists: L1, L2, Enter.

Correlation measures the strength of the linear association between two quantitative variables. Before you use correlation, you must check several conditions:
- Quantitative Variables Condition
- Straight Enough Condition
- Outlier Condition

Quantitative Variables Condition:
- Correlation applies only to quantitative variables.
- Don't apply correlation to categorical data camouflaged as quantitative (zip codes, ID #s, area codes, etc.).
- Check that you know the variables' units and what they measure.

Straight Enough Condition:
- You can calculate a correlation coefficient for any pair of variables.
- But correlation measures the strength only of the linear association, and will be misleading if the relationship is not linear.

Outlier Condition:
- Outliers can distort the correlation dramatically.
- An outlier can make an otherwise small correlation look big or hide a large correlation.
- It can even give an otherwise positive association a negative correlation coefficient (and vice versa).
- When you see an outlier, it's often a good idea to report the correlations with and without the point.

The sign of a correlation coefficient gives the direction of the association. Correlation is always between –1 and +1.
- Correlation can be exactly equal to –1 or +1, but these values are unusual in real data because they mean that all the data points fall exactly on a single straight line.
- A correlation near zero corresponds to a weak linear association.

Correlation treats x and y symmetrically:
- The correlation of x with y is the same as the correlation of y with x.

Correlation has no units. Correlation is not affected by changes in the center or scale of either variable.
- Correlation depends only on the z-scores, and they are unaffected by changes in center or scale.

Correlation measures the strength of the linear association between the two variables.
- Variables can have a strong association but still have a small correlation if the association isn't linear.

Correlation is sensitive to outliers. A single outlying value can make a small correlation large or make a large one small.

Whenever we have a strong correlation, it is tempting to explain it by imagining that the predictor variable has caused the response variable to change. Scatterplots and correlation coefficients never prove causation. A hidden variable that stands behind a relationship and determines it by simultaneously affecting the other two variables is called a lurking variable.

It is common in some fields to compute the correlations between each pair of variables in a collection of variables and arrange these correlations in a table.

Sketch a scatterplot of the following information. Discuss the direction, form, and strength of the association. If the data meet the appropriate conditions, find the correlation coefficient (r).
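The properties listed above (symmetry, unitlessness, sensitivity to outliers) are easy to verify numerically. The snippet below is an illustrative check using made-up height/weight values and NumPy's corrcoef; it is not taken from the lesson materials.

```python
import numpy as np

heights_in = np.array([61, 64, 66, 68, 70, 72, 74], dtype=float)         # inches (illustrative)
weights_lb = np.array([120, 135, 140, 155, 160, 175, 190], dtype=float)  # pounds (illustrative)

r_xy = np.corrcoef(heights_in, weights_lb)[0, 1]
r_yx = np.corrcoef(weights_lb, heights_in)[0, 1]
print(r_xy, r_yx)                                   # identical: r treats x and y symmetrically

# Changing center or scale (inches -> centimeters, pounds -> kilograms) leaves r unchanged.
print(np.corrcoef(heights_in * 2.54, weights_lb * 0.4536)[0, 1])

# A single outlier (a very tall, very light "student") drags r down sharply.
h_out = np.append(heights_in, 80.0)
w_out = np.append(weights_lb, 120.0)
print(np.corrcoef(h_out, w_out)[0, 1])
```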
Earth will not be able to support and sustain life forever. As our Sun ages, it is becoming more luminous, meaning that in the future Earth will receive more solar energy. In a new model study, researchers from Japan and the US used computer simulations to model the future evolution of the carbon, oxygen, phosphorus, and sulfur cycles on the surface of the Earth. The study was led by Kazumi Ozaki of Japan's Toho University and Christopher Reinhard of Georgia Tech. They modeled two theoretical scenarios: an Earth-like planet with an active biosphere, and a planet without an active biosphere. Interestingly, both scenarios produced broadly similar results: oxygen levels fall drastically at around 1 billion years in the future. As carbon dioxide levels fall, plant photosynthesis will begin to suffer, resulting in reduced production of oxygen. Oxygen production, and gradually oxygen concentrations in Earth's atmosphere, will drop, creating a crisis for other forms of future life. Without plant life, oxygen levels will drop, causing a mass extinction event among animals. The researchers found that Earth's oxygenated atmosphere will not be a permanent feature, and Ozaki and his team drew conclusions about how long climate stability can be expected in the future. In short, it is the balance between the geochemistry of which rocks enter the mantle during subduction and which gases are emitted from the mantle via volcanoes that seems to mostly determine how long Earth's atmosphere will remain oxygen-rich.

A separate line of research looks backward rather than forward. It remains uncertain how the ancient upper atmosphere could have stayed oxygen-rich while the ancient lower atmosphere remained oxygen-poor. "With this project we have opened up a new way of investigating Earth's ancient atmosphere", said study lead author Andrew Tomkins, a geoscientist at Monash University in Melbourne, Australia. To arrive at their results, the researchers analyzed 2.5-billion-year-old black shales from Western Australia. According to geological evidence, Earth acquired its oxygen-rich atmosphere around 2.4 billion years ago, in the Great Oxidation Event. While this facilitated the eventual evolution of complex life like humans, it changed the course of Earth history forever. "If we were to view the ancient Earth through a telescope, would we recognise a habitable world? The Great Oxidation Event reminds us", says Locmelis. We know how important an oxygen-rich atmosphere has been for life on Earth. Scientists also want to know more about Mars' atmosphere in the distant past. Earth's oceans are of course host to myriad forms of life, so it seems compelling that Mars' early surface environment was a place contemporary Earth life could have lived, but it remains a mystery why evidence of life on Mars is so hard to find.

As the authors of the new study suggest, using Earth as an analogue we might need to think more broadly about which gases could signal life, and to broaden our search from planets like our own to include those with a hydrogen atmosphere. Searches typically focus on rocky planets with Earth-like temperatures that could support life as we know it. But what if alien life uses somewhat different chemistry to ours? On our planet, photosynthetic life takes in carbon dioxide and releases oxygen. As atmospheric carbon dioxide increases, the temperature rise correlates with both the added greenhouse gas and solar brightening. The Sun constitutes 99.86 percent of the solar system's mass. FLUXNET towers around the world measure the exchanges of carbon dioxide and water vapor between ecosystems and the atmosphere.
Clues they found in ancient rocks suggest that about 2.7 billion to 2.8 billion years ago, oxygen was released into Earth's atmosphere for the first time, forming new minerals like iron oxide. One exciting thing about our discovery of sulfidic conditions occurring before the GOE is that it might shed light on ocean chemistry during other periods in the geologic record, such as a poorly understood 400-million-year interval between the GOE and around 1.8 billion years ago, a point in time when the deep oceans stopped showing signs of high iron concentrations, Reinhard said. "Sulfate is the key ingredient in hydrogen sulfide formation in the ocean." Before the GOE, levels of free oxygen in the atmosphere were only a tiny fraction of what they are today. The evolution of eukaryotes had to take place first. A planetary atmosphere with abundant oxygen would provide a very promising biosignature.

Other mechanisms could help stir Europa's crust, explained researcher Richard Greenberg, a planetary scientist at the University of Arizona's Lunar and Planetary Laboratory in Tucson. In only 12 million years, oxidant concentrations would reach the minimum oxygen concentration seen in Earth's oceans, enough to support small crustaceans, Greenberg found. The predictions are based on a computer simulation of the impact that long-term changes to the Sun are likely to have on Earth. "Our new hypothesis ought to trigger a geological rethink regarding the long period of cooling in the run-up to the last ice age", said von Blanckenburg. A better understanding of how gravity waves in the upper atmosphere interact with the jet stream, polar vortex and other phenomena could be key to improved weather predictions and climate models.

A reduction–oxidation (redox) reaction is a type of chemical reaction in which the oxidation states of atoms are changed. As for atmospheric composition, the question is whether you can have a high level of atmospheric oxygen without life. The Sun's location in the Milky Way also makes it a good representative of the entire galaxy. In 1989, the standard oxygen abundance was 8.93, which meant there were 1,175 hydrogen atoms for every oxygen atom. The scientists analyzed the micrometeorites using electron microscopes and high-energy X-rays from the Australian Synchrotron. The general thought is that the methane, combined with carbon dioxide, may have created an organic haze if the conditions were right. "As we try to better understand Earth's long periods of geological time", Tomkins said. The missing atmosphere eliminates the possibility of surface water existing in liquid form. The new study was published this week in the open-access journal Science Advances. Carbon dioxide, a heat-trapping greenhouse gas, would counteract the weaker sunlight. In Russia, analysis of ancient sedimentary rock indicates that for millions of years after the GOE, conditions on Earth were more than suitable for the continued evolution of complex life. Of course, even if the findings of the new research are true, this doesn't mean oxygen levels never went down. What will happen in the future if global warming continues? The pressures and temperatures remain at this high level until the end of the simulation. In coastal water bodies, including estuaries and seas, low-oxygen sites have increased… The scientists found chemical fingerprints of the oxygen level by measuring trace metals in the sediments.
According to coauthor Robert Anderson, a geochemist at Columbia University, the study "finally provides the long-sought smoking gun" that conditions in the deep sea were altered by the buildup of decaying organic matter from above. There is still a major discovery to be made to find out exactly how the catalysis works, and now knowing where this machinery comes from may open new perspectives into its function, an understanding that could help target technologies for energy production from artificial photosynthesis. Habitability depends on factors such as proximity to an appropriate star, the likely presence of liquid water, and things like that. The Nature study found that over the same period, the proportion of fossil-fuel emissions absorbed by the oceans has fallen by as much as 10%. But the problem was that their chemical composition doesn't closely match that of our planet's rocks. The carbonaceous chondrites also formed in the outer Solar System, making it less likely they could have pelted the early Earth. We measured the hydrogen isotopic composition of enstatite chondrites, providing more proof that these were responsible for the bulk of Earth's water. The outlook for any life form large enough to be seen with the naked eye seems pretty grim to me. But in a study published Wednesday in the journal Nature, researchers reported what is likely to happen under a warming climate. To reveal these kinds of climate-carbon cycle feedback mechanisms under natural circumstances, David De Vleeschouwer and colleagues exploited isotopic data from deep-ocean sediment cores. The evident conclusion of this experiment is that, under the current growth model, Earth would suffer a rise in surface temperature that is difficult to specify but that carries a high risk.
History of Poland (1939–45)

The history of Poland from 1939 to 1945 encompasses primarily the period from the invasion of Poland by Nazi Germany to the end of World War II. The outbreak of the war followed a period of intense armament by Nazi Germany and other neighbors of Poland, with which Poland was unable to keep up because of the country's limited economic resources. Following the German-Soviet non-aggression treaty, Poland was invaded by Nazi Germany on 1 September 1939 and by the Soviet Union on 17 September. The campaigns ended in early October with Germany and the Soviet Union dividing and annexing the whole of Poland. After the Axis attack on the Soviet Union in the summer of 1941, all of Poland was occupied by Germany. Under the two occupations, Polish citizens suffered enormous human and material losses. It is estimated that about 5.7 million Polish citizens died as a result of the German occupation and about 150,000 Polish citizens died as a result of the Soviet occupation. Ethnic Poles were subjected to both Nazi and Soviet persecution. The Jews were singled out by the Germans for quick and total annihilation, and about 90% of Polish Jews (close to three million people) were murdered. Jews and others were killed en masse at Nazi extermination camps, such as Auschwitz, Treblinka and Sobibór. Ethnic cleansing and massacres of civilian populations, mostly Poles, were perpetrated in western Ukraine from 1943. The historically unprecedented war crimes committed in Poland were divided at the postwar Nuremberg trials into three main categories of wartime criminality: waging a war of aggression, war crimes, and crimes against humanity.

A Polish resistance movement began organizing soon after the invasions in 1939. Its largest military component was a part of the Polish Underground State network of organizations and activities and became known as the Home Army. The whole clandestine structure was formally directed by the Polish government-in-exile through its delegation resident in Poland. There were also peasant, right-wing, leftist and Jewish partisan organizations. Among the anti-German uprisings waged were the Warsaw Ghetto Uprising and the Warsaw Uprising. The latter was a late (August–September 1944), large-scale and ill-fated attempt to prevent the Soviet Union from dominating Poland's postwar government. Collaboration with the occupiers was limited. The Nazis planned a permanent elimination of any form of Polish statehood and even a longer-term destruction of the Polish nation.

In September 1939, Polish government officials sought refuge in Romania, but their subsequent internment there prevented the intended continuation abroad as the government of Poland. General Władysław Sikorski, a former prime minister, arrived in France, where a replacement government in exile was soon formed. After the fall of France the Polish government was evacuated to Britain. It was torn by a conflict between the post-Sanation and anti-Sanation elements, with the latter, led by Prime Minister Sikorski, gaining the upper hand because of the support of the French and then the British government. The Polish armed forces had been reconstituted and fought alongside the Western Allies in France, Britain and elsewhere.
In order to cooperate with the Soviet Union, which after the German attack became an important war ally of the West, Sikorski negotiated with Joseph Stalin in Moscow, and the formation of a Polish army in the Soviet Union, intended to fight on the Eastern Front alongside the Soviets, was agreed. The "Anders' Army" was indeed created, but with Soviet and British permission it was instead taken to the Middle East. Further attempts at continuing Polish-Soviet cooperation were made, but they failed because of the disagreements over the borders, the discovery of the Katyn massacre of Polish POWs perpetrated by the Soviets, and the death of General Sikorski.

Stalin pursued a strategy of facilitating the formation of a Polish government independent of (and in opposition to) the exile government in London. He empowered the Polish communists, whose party he had eliminated in 1938 by murdering most of its activists and who had limited popular support in Poland. Among the new communist organizations were the Polish Workers' Party in occupied Poland and the Union of Polish Patriots in Moscow. A new Polish army was being formed in the Soviet Union to fight together with the Soviets. At the same time Stalin worked on co-opting the Western Allies (the United States led by President Franklin D. Roosevelt and the United Kingdom led by Prime Minister Winston Churchill), who in reality conformed to Stalin's views on Poland's borders and future government (he promised free elections). A series of negotiations included the conferences in Tehran, Yalta, and finally at Potsdam. The Polish government in exile approved, and the underground in Poland undertook, unilateral political and military actions aimed at establishing an independent Polish authority, but they were not successful. The government ceased being a recognized partner in the Allied coalition. The Polish communists founded the State National Council in 1943/44 in occupied Warsaw and the Polish Committee of National Liberation in July 1944 in Lublin, after the arrival of the Soviet army. The Soviet Union did not return the prewar Polish Kresy (the eastern lands), granting Poland instead the greater southern portion of the eliminated German East Prussia and shifting the country west to the Oder–Neisse line, under Stalin's plan to prevent Germany's future re-emergence as a great military power.[t] Poland was still to experience much internal turbulence and power struggle, but barring the West's war with the Soviet Union, Soviet domination was a foregone conclusion.

- 1 Before the war
- 2 German and Soviet invasions of Poland
- 3 Occupation of Poland
- 4 Resistance in Poland
- 5 The Holocaust in Poland
- 6 Polish-Ukrainian conflict
- 7 Government in exile and communist victory
- 8 Polish state reestablished with new borders and under Soviet domination
- 9 See also
- 10 Notes
- 11 Citations
- 12 References
- 13 External links

Before the war

Rearmament and first annexations

After the death of Józef Piłsudski in 1935, the Sanation government of his political followers, along with President Ignacy Mościcki, embarked on a military reform and rearmament of the Polish Army in the face of the changing political climate in Europe. Thanks in part to a financial loan from France, Poland's new Central Industrial Region participated in the project from 1936, in an attempt to catch up with the advanced weapons development of Poland's richer neighbors.
Foreign Minister Józef Beck continued to resist the growing pressure on Poland from the West to cooperate with the Soviet Union in order to contain Germany. Against the rapidly growing German military force, Poland not only possessed no comparable quantity of technical resources, but also lacked the knowledge and concepts needed to develop modern warfare. The officially pursued German rearmament began in 1935 under Adolf Hitler, contrary to the provisions of the Treaty of Versailles – the foundation of the post-World War I international order. Unable to prevent Hitler's remilitarization of the Rhineland, both the United Kingdom and France also pursued rearmament. Meanwhile, the German territorial expansion into central Europe began in earnest with the Anschluss of Austria in March 1938. Poland dispatched special diversionary groups to the disputed Zaolzie (Czech Silesia) area in the hope of expediting the breakup of Czechoslovakia and regaining the territory. The Munich Agreement of 30 September 1938 was followed by Germany's incorporation of the Sudetenland. Faced with the threat of a total annexation of Czechoslovakia, the Western Powers endorsed the German partition of the country. Poland insistently sought great power status, but was not invited to participate in the Munich conference. Minister Beck, disappointed with the lack of recognition, issued an ultimatum to the government of Czechoslovakia on the day of the Munich Agreement, demanding an immediate return to Poland of the contested Zaolzie border region. The distressed Czechoslovak government complied, and the Polish military units took over the area. The move was negatively received in the West and contributed to the worsening of Poland's geopolitical situation. In November, the Polish government also annexed a small border region in dispute with the newly autonomous state of Slovakia and gave its support to Hungary's expansion into Carpatho-Ukraine, located within the now federal Czechoslovakia.

Aftermath of the Munich Agreement

The Munich Agreement of 1938 did not last for long. In March 1939 the German occupation of Czechoslovakia began with the invasion of Bohemia and Moravia, leaving Slovakia as a German puppet state. Lithuania was forced to give up its Klaipėda Region (Memelland). Formal demands were made for the return of the Free City of Danzig to Germany, even though its status was guaranteed by the League of Nations. In early 1939 Hitler proposed to Poland an alliance on German terms, with an expectation of compliance. The Polish government would have to agree to Danzig's incorporation by the Reich and to an extraterritorial highway passage connecting East Prussia with the rest of Germany through the so-called Polish Corridor (an area linking the Polish mainland with the Baltic Sea). Poland would join an anti-Soviet alliance and coordinate its foreign policy with Germany, thus becoming a client state. The independence-minded Polish government was alarmed, and a British guarantee of Poland's independence was issued on 31 March 1939. Reacting to this act and to Poland's effective rejection of the German demands, Hitler renounced the existing German–Polish Non-Aggression Pact on 28 April. In August 1939 negotiations took place in Moscow, launched by the competing Allied-Soviet and Nazi-Soviet working groups, each attempting to enlist Stalin's powerful army on its side.
By the evening of 23 August 1939, Germany's offer was accepted by default, because the Polish leaders' refusal to cooperate militarily with the Soviets prevented the possibility of the alternative outcome. The Molotov–Ribbentrop Pact of non-aggression was signed. In anticipation of the surprise attack and occupation of Poland by Nazi Germany, the pact included top secret provisions carving up Eastern Europe into spheres of influence of the two signatories, with the dividing line running through the territory of east-central Poland. The "desirability of the maintenance of an independent Polish State", read the text, which was discovered years later, was left to mutually agreed "further political developments".[l]

The Soviet Union, having its own reasons to fear German eastward expansionism, had repeatedly negotiated with France and the United Kingdom, and through them made an offer to Poland of an anti-German alliance, similar to the earlier one made to Czechoslovakia. The British and the French sought the formation of a powerful political-military bloc, comprising the Soviet Union, Poland and Romania in the east, and France and Britain in the west. As of May 1939, the Soviet conditions for signing an agreement with Britain and France were as follows: the right of the Red Army troops to pass through Polish territory, the termination of the Polish–Romanian Alliance, and the limitation of the British guarantee to Poland to cover only Poland's western frontier with Germany. The Polish leaders believed that once on Polish territory the Soviet troops would not leave, and throughout 1939 refused to agree to any arrangement which would allow Soviet troops to enter Poland. The Polish unwillingness to accept the dangerous Soviet offer of free entry is illustrated by a quote from Marshal Edward Rydz-Śmigły, commander-in-chief of the Polish armed forces, who said: "With the Germans we run the risk of losing our liberty. With the Russians we will lose our soul". The attitude of the Polish leadership was also reflected by Foreign Minister Józef Beck, who, apparently confident in the French and British declarations of support, asserted that the security of Poland was not going to be guaranteed by a "Soviet or any other Russia". The Soviets then turned to concluding the German offer of a treaty, and the Molotov–Ribbentrop Pact was signed. The Soviet-Nazi cooperation had been making progress since May 1939, when Vyacheslav Molotov became the Soviet minister of foreign affairs.

The German military used an automated cipher system for the secret transfer of messages, based on the Enigma machine. The constantly generated and altered code scheme was broken by Polish mathematicians led by Marian Rejewski, and the discovery was shared with the French and the British before the outbreak of the war. Cryptanalysis of the Enigma was an immensely important Polish contribution to the war effort, as it was continued throughout the war in Britain and deprived the unsuspecting Germans of secrecy in their crucial communications.

At the end of August the Polish-British and Polish-French alliance obligations were updated. Poland, surrounded by the Nazi-led coalition, was under partial military mobilization but poorly prepared for war.[p] Full (general) mobilization was prevented by pressure from the British and the French, who sought a last-minute peaceful solution to the imminent Polish-German conflict. On 1 September 1939, Poland was invaded by Nazi Germany.
Both Britain and France, bound by a military alliance with Poland, declared war on Germany two days later.

German and Soviet invasions of Poland

On 1 September 1939, without a formal declaration of war, Nazi Germany invaded Poland; the immediate pretext was the Gleiwitz incident, a provocation (one of many) staged by the Germans, who claimed that Polish troops had attacked a post along the German–Polish border. During the following days and weeks the technically, logistically and numerically superior German forces rapidly advanced into Polish territory. Secured by the Molotov–Ribbentrop Pact, the Soviet troops also invaded Poland, on 17 September 1939. Before the end of the month most of Poland was divided between the Germans and the Soviets.

The German attack was not anticipated in a timely manner. Defense preparations on the western border had been discontinued under Józef Piłsudski's leadership after 1926 and were resumed only in March 1939. Afterwards the Polish Armed Forces were organized for the defense of the country. Their technical and organizational level, according to the historian Andrzej Leon Sowa, corresponded to that of the World War I period. The armed forces' strategic position was made more hopeless by the recent German occupation of Czechoslovakia. Poland was now surrounded on three sides by the German territories of Pomerania, Silesia and East Prussia, and by German-controlled Czechoslovakia. The newly formed Slovak state assisted its German allies by attacking Poland from the south. The Polish forces were blockaded on the Baltic Coast by the German navy. The Polish public, conditioned by government propaganda, was not aware of the gravity of the situation and expected a quick and easy victory of the Polish-French-British alliance.

The German "concept of annihilation" (Vernichtungsgedanke) that later evolved into the Blitzkrieg ("lightning war") provided for the rapid advance of Panzer (armoured) divisions, dive bombing (to break up troop concentrations and destroy airports, railways and stations, roads, and bridges, which resulted in the killing of large numbers of refugees crowding the transportation facilities), and aerial bombing of undefended cities to sap civilian morale. Deliberate bombing of civilians took place on a massive scale from the first day of the war, also in areas far removed from any other military activity. The German forces, ordered by Hitler to act with the harshest cruelty, massively engaged in the murder of Polish civilians. The Polish army, air force and navy had insufficient modern equipment to match the onslaught. Each of Germany's five armies involved in attacking Poland was accompanied by a special security group charged with terrorizing the Polish population; some of the Polish citizens of German nationality had been trained in Germany to help with the invasion, forming the so-called fifth column. Many leaders of the German minority in Poland and communist activists were interned by the Polish authorities after 1 September. 10,000–15,000 ethnic Germans were arrested and force-marched toward Kutno soon after the beginning of the hostilities. Of them, about 2,000 were killed by angry Poles, and other instances of killing of ethnic Germans took place elsewhere. Far greater numbers of Polish civilians were killed by the Wehrmacht throughout the "September Campaign". 58 German divisions, including 9 Panzer divisions, were deployed against Poland.
Germany commanded 1.5 million men, 187,000 motor vehicles, 15,000 artillery pieces, 2,600 tanks, 1,300 armored vehicles, 52,000 machine guns and 363,000 horses. 1,390 Luftwaffe warplanes were used to attack Polish targets. On 1 September the German navy positioned its old battleship Schleswig-Holstein to shell Westerplatte, a section of the Free City of Danzig, an enclave separate from the main city and awarded to Poland by the Treaty of Versailles in 1919. 53 navy ships were designated for action against Poland.

According to Antoni Czubiński, 1.2 million Polish troops had been mobilized, but some did not even have rifles. There were 30 infantry divisions, 11 cavalry brigades, 31 light artillery regiments, 10 heavy artillery regiments and 6 aerial regiments. They possessed 3,600 artillery pieces (mostly regular, with only a few hundred anti-armor or anti-aircraft units), and 600 tanks, of which 120 were of the advanced 7TP type. The air force regiments included 422 aircraft, including 160 PZL P.11c, 31 PZL P.7a and 20 P.11a fighters, 120 PZL.23 Karaś reconnaissance-bombers, and 45 PZL.37 Łoś medium bombers. The Polish-made P-series fighter planes were becoming obsolete; state-of-the-art P-24s were built but sold abroad to generate currency. Łoś bombers were modern and fast. The navy's participation was limited by the withdrawal of major ships to the United Kingdom to prevent their destruction, and by their linking up with the Royal Navy (known as the Peking Plan). The navy consisted of four destroyers (of which three had left for England), one minelayer, five submarines, and some smaller vessels, including six new minesweepers.

Although the UK and France declared war on Germany on 3 September, little movement took place on the western front. The offensive in the West that the Poles understood they had been promised was not materializing, and, according to Norman Davies, it was not even immediately feasible or practical. Because of the Western inaction, the secret protocols of the German-Soviet treaty, and other factors including its own poor intelligence, the Polish government was initially not fully aware of the degree of the country's isolation and the hopelessness of its situation. The combined British and French forces were strong in principle, but not ready for an offensive for a number of reasons. The few limited air raids attempted by the British were ineffective and caused losses of life and equipment. Dropping propaganda leaflets thereafter became their preferred course of action, to the dismay of the Polish public, which was led to believe that a real war on two fronts and a defeat of the Third Reich were coming.

The several Polish armies were defending the country in three main concentrations of troops, which had no territorial command structure of their own and operated directly under orders from Marshal Edward Rydz-Śmigły; this turned out to be a serious logistical shortcoming. The armies were positioned along the border in a semicircle, which made for weak defense, because the Germans concentrated their forces in the chosen directions of attack. The German armored corps quickly thwarted all attempts at organized resistance, and by 3–4 September the Polish border defenses were broken along all the axes of attack. Crowds of civilian refugees fleeing to the east blocked roads and bridges. The Germans were also able to circumvent other concentrations of the Polish military and arrive in the rear of Polish formations.
As the Polish armies were being destroyed or in retreat, the Germans took Częstochowa on 4 September, and Kraków and Kielce on 6 September. The Polish government was evacuated to Volhynia, and the supreme military commander Rydz-Śmigły left Warsaw on the night of 6 September and moved east toward Brześć. General Walerian Czuma took over and organized the defense of the capital city. According to Halik Kochanski, Rydz-Śmigły fled the capital and the Polish high command failed its army. Rydz-Śmigły's departure had disastrous effects both on the morale of the Polish armed forces and on his ability to exercise effective overall command. The Germans began surrounding Warsaw on 9 September. City president Stefan Starzyński played an especially prominent role in its defense. The campaign's greatest battle, the Battle of the Bzura, was fought west of the middle Vistula on 9–21 September. Heavy fighting also took place at a number of other locations, including the area of Tomaszów Lubelski (until 26 September), and a determined defense of Lwów was mounted (against the German forces until 22 September, when the defenders surrendered to the Soviets upon their arrival). On 13 September, Marshal Rydz-Śmigły ordered all Polish forces to withdraw toward the so-called Romanian Bridgehead in southeastern Poland, next to the Romanian and Soviet borders, the area he designated to be the final defense bastion. On 11 September, foreign minister Józef Beck asked France to grant asylum to the Polish government and Romania to allow the transfer of the government members through its territory. On 12 September, the Allied war council deliberating in Abbeville, France, concluded that the Polish military campaign had already been resolved and that there was no point in launching an anti-German relief expedition. The Polish leaders were unaware of the decision and still expected a Western offensive.

Germany urged the Soviet Union from 3 September to engage its troops against the Polish state, but the Soviet command was stalling, waiting for the outcome of the German-Polish confrontation and to see what the French and the British were going to do. The Soviet Union assured Germany that the Red Army advance into Poland would follow later, at an appropriate time. For the optimal "political motivation" (a collapse of Poland having taken place), Molotov wished to hold off the Soviet intervention until the fall of Warsaw, but the city's capture by the Germans was being delayed due to its determined defense effort (until 27 September). The Soviet troops marched into Poland on 17 September; the Soviet Union claimed that Poland was by then non-existent anyway (according to the historian Richard Overy, Poland was defeated by Germany within two weeks of 1 September). Concerns about the Soviets' own security were used to justify the invasion. The Soviet entry was also rationalized by the need to protect the ethnically Belarusian and Ukrainian populations. The invasion was coordinated with the movement of the German army, and met limited resistance from the Polish forces. The Polish military formations available in the eastern part of the country were ordered by the high command, now at the Romanian border, to avoid engaging the Soviets,[c] but some fighting between Soviet and Polish units did take place (such as the Battle of Szack fought by the Border Protection Corps). The Soviet forces moved west (to the Bug River) and south to fill the area assigned to them by the secret protocol of the Molotov–Ribbentrop Pact.
They took steps to block the potential Polish evacuation routes into Lithuania, Latvia, Romania and Hungary. About 13.4 million Polish citizens lived in the areas seized by the Soviet Union. Of those, about 8.7 million were Ukrainians, Belarusians and Jews. The minorities' relations with the Polish authorities were generally bad, and many of their members greeted and supported the arriving Red Army troops as liberators. The British and French responses to the "not unexpected" Soviet encroachment were muted. Had it not been for the Soviet-German treaty and the Soviet invasion, all of prewar Poland would likely have been captured by Nazi Germany as early as 1939.

End of campaign

The Nazi-Soviet treaty process was continued with the German-Soviet Frontier Treaty signed on 28 September. It adjusted and finalized the territorial division, placing Lithuania within the Soviet sphere and moving the agreed Soviet-German boundary east from the Vistula to the Bug River, and authorized further joint action to control occupied Poland. An idea of retaining a residual Polish state, considered earlier, was abandoned. The Polish government and military high command retreated to the southeast Romanian Bridgehead territory and crossed into neutral Romania on the night of 17 September. From Romania on 18 September President Ignacy Mościcki and Marshal Rydz-Śmigły issued declarations and orders, which violated their status as persons passing through a neutral country. Germany pressured Romania not to allow the Polish authorities to depart (toward France) and the group was interned. The Polish ambassador in Romania helped General Władysław Sikorski, a member of the Polish opposition who had been refused a military assignment and had also entered Romania, to acquire departure documents, and the general left for France.

Resistance continued in many places. Warsaw was eventually bombed into submission. The event that served as a trigger for its surrender on 27 September was the bombing damage to the water supply system, caused by deliberate targeting of the waterworks. Warsaw suffered the greatest damage and civilian losses (40,000 killed) already in September 1939.[s] The Modlin Fortress capitulated on 29 September, the Battle of Hel continued until 2 October, and the Battle of Kock was fought until 4 October. In the country's woodlands, army units began underground resistance almost at once. Major "Hubal" and his regiment pioneered this movement.

During the September Campaign, the Polish Army lost about 66,000 troops on the German front; about 400,000 became prisoners of Germany and about 230,000 of the Soviet Union.[e] 80,000 managed to leave the country. 16,600 German soldiers were killed and 3,400 were missing. 1,000 German tanks or armored vehicles and 600 planes were destroyed. The Soviet Army lost between 2,500 and 3,000 soldiers, while 6,000 to 7,000 Polish defenders were killed in the east. Over 12,000 Polish citizens executed by the Nazis were among the approximately 100,000 civilian victims of the campaign. Several Polish Navy ships reached the United Kingdom, and tens of thousands of soldiers escaped through Hungary, Romania, Lithuania and Sweden to continue the fight. Many Poles took part in the Battle of France, the Battle of Britain, and, allied with the British forces, in other operations (see Polish contribution to World War II).

Occupation of Poland

The greatest extent of depredations and terror inflicted on and suffered by the Poles resulted from the German occupation.
The most catastrophic series of events was the extermination of the Jews, known as the Holocaust. About 1⁄6 of Polish citizens lost their lives in the war, most of them civilians targeted by various deliberate actions. The German plan involved not only the annexation of Polish territory, but also the total destruction of Polish culture and the Polish nation (Generalplan Ost).

Under the terms of two decrees by Hitler (8 October and 12 October 1939), large areas of western Poland were annexed to Germany. These included all the territories which Germany had lost under the 1919 Treaty of Versailles, such as the Polish Corridor, West Prussia and Upper Silesia, but also a large area of indisputably Polish territory east of these territories, including the city of Łódź. The annexed areas of Poland were divided into the following administrative units:
- Reichsgau Wartheland (initially Reichsgau Posen), which included the entire Poznań Voivodeship, most of the Łódź Voivodeship, five counties of the Pomeranian Voivodeship, and one county of the Warsaw Voivodeship;
- the remaining area of the Pomeranian Voivodeship, which was incorporated into the Reichsgau Danzig-West Prussia (initially Reichsgau Westpreussen);
- Ciechanów District (Regierungsbezirk Zichenau), consisting of five northern counties of the Warsaw Voivodeship (Płock, Płońsk, Sierpc, Ciechanów and Mława), which became a part of East Prussia;
- Katowice District (Regierungsbezirk Kattowitz) or, unofficially, East Upper Silesia (Ost-Oberschlesien), which included the Silesian Voivodeship, the Sosnowiec, Będzin, Chrzanów, Oświęcim, and Zawiercie counties, and parts of the Olkusz and Żywiec counties, and which became a part of the Province of Upper Silesia.

The area of these annexed territories was 92,500 square kilometres and the population was about 10.6 million, the great majority of whom were Poles. In Pomeranian districts, German summary courts sentenced 11,000 Poles to death in late 1939 and early 1940. Jews were expelled from the annexed areas and placed in ghettos such as the Warsaw Ghetto or the Łódź Ghetto. Catholic priests became targets of campaigns of murder and deportation on a mass scale. The population in the annexed territories was subjected to intense racial screening and Germanisation. The Poles experienced property confiscations and severe discrimination; 100,000 were removed from the port city of Gdynia alone, as early as October 1939. In 1939–40, many Polish citizens were deported to other Nazi-controlled areas, especially the General Government, or to concentration camps. With the clearing of some western Poland regions for German resettlement, the Nazis initiated their policies of ethnic cleansing. (see also: Expulsion of Poles by Nazi Germany)

Under the terms of the Molotov–Ribbentrop Pact and the German-Soviet Frontier Treaty, the Soviet Union annexed all Polish territory east of the line of the rivers Pisa, Narew, Bug and San, except for the area around Vilnius (known in Polish as Wilno), which was given to Lithuania, and the Suwałki region, which was annexed by Germany. These territories were largely inhabited by Ukrainians and Belarusians, with minorities of Poles and Jews (for numbers see Curzon Line). The total area, including the area given to Lithuania, was 201,000 square kilometres, with a population of 13.2 million. A small strip of land that was a part of Hungary before 1914 was given to Slovakia.
After the German attack on the Soviet Union in June 1941, the Polish territories previously occupied by the Soviets were organized as follows:
- Bezirk Bialystok (District of Białystok), which included the Białystok, Bielsk Podlaski, Grajewo, Łomża, Sokółka, Wołkowysk, and Grodno counties, was "attached" to (but not incorporated into) East Prussia;
- Bezirke Litauen und Weißrussland — the Polish part of White Russia (today western Belarus) and the Vilnius province were incorporated into the Reichskommissariat Ostland;
- Bezirk Wolhynien-Podolien — the Polish Province of Volhynia was incorporated into the Reichskommissariat Ukraine;
- Distrikt Galizien — East Galicia was incorporated into the General Government and became its fifth district.

The remaining block of territory was placed under a German administration called the General Government (in German Generalgouvernement für die besetzten polnischen Gebiete), with its capital at Kraków. It became a part of Greater Germany (Grossdeutsches Reich). The General Government was originally subdivided into four districts, Warsaw, Lublin, Radom, and Kraków, to which East Galicia and a part of Volhynia were added as a district in 1941. (For more detail on the territorial division of this area see General Government.) The General Government was the part of the planned Lebensraum, or German "living space" in the east, nearest to Germany proper, and it constituted the beginning of the implementation of the Nazis' grandiose and genocidal human engineering scheme. A German lawyer and prominent Nazi, Hans Frank, was appointed Governor-General of the General Government on 26 October 1939. Frank oversaw the segregation of the Jews into ghettos in the larger cities, including Warsaw, and the use of Polish civilians for compulsory labour in German war industries. Some Polish institutions, including the police, were preserved in the General Government. Political activity was prohibited and only basic Polish education was allowed. University professors in Kraków were sent to a concentration camp, and those in Lviv were shot.[d] Ethnic Poles were to be gradually eliminated. The Jews, intended for a more immediate extermination, were herded into ghettos and severely repressed. The Jewish councils in the ghettos had to follow the German policies. Many Jews escaped to the Soviet Union (they were among the estimated 300,000 to 400,000 refugees that arrived there from German-occupied Poland) and some were sheltered by Polish families. The population in the General Government's territory was initially about 11.5 million in an area of 95,500 km², but this increased as about 860,000 Poles and Jews were expelled from the German-annexed areas and "resettled" in the General Government. After Operation Barbarossa, the General Government's area was 141,000 km², with 17.4 million inhabitants.

Tens of thousands were murdered in the German campaign of extermination of the Polish intelligentsia and other elements thought likely to resist (e.g. Operation Tannenberg and Aktion AB). Catholic clergy were commonly imprisoned or otherwise persecuted, and many ended up sent to their deaths in concentration camps. Tens of thousands of members of the resistance and others were tortured and executed at the Pawiak prison in Warsaw. From 1941, disease and hunger also began to reduce the population, as the exploitation of resources and labor, terror and Germanisation reached greater intensity after the attack on the Soviet Union.
Poles were also deported in large numbers to work as forced labor in Germany, or taken to concentration camps. About two million were transported to Germany to work as slaves, and many died there.[i] Łapanka, or random roundup on streets or elsewhere, was one of the methods practiced by the Nazis to catch people for forced labor. Several hundred Wehrmacht brothels, for which local non-German women were forcibly recruited, operated throughout the Reich. In contrast to Nazi policies in occupied Western Europe, the Germans treated the Poles with intense hostility, and all Polish state property and private industrial concerns were taken over by the German state. Poland was plundered and subjected to extreme economic exploitation throughout the war period.

The future fate of Poland and the Poles was decided in Generalplan Ost, a Nazi plan to engage in genocide and ethnic cleansing of the territories occupied by Germany in Eastern Europe in order to exterminate the Slavic peoples. Tens of millions were to be eliminated, others resettled in Siberia or turned into slave populations. The cleared territories were to be resettled by Germans, and a trial evacuation of all Poles was attempted in the Zamość region in 1942 and 1943. 121,000 Poles were removed from their villages and replaced with 10,000 German settlers. Under the Lebensborn program, about 200,000 Polish children were kidnapped by the Germans to be tested for racial characteristics that would make them suitable for Germanisation. Of that number (many were found unsuitable and killed), only between 15% and 20% were returned to Poland after the war.

By the end of the Soviet invasion, the Soviet Union had taken 50.1% of the territory of Poland (195,300 km²), with 12,662,000 people. Population estimates vary; one analysis gives the following numbers in regard to the ethnic composition of these areas at the time: 38% Poles, 37% Ukrainians, 14.5% Belarusians, 8.4% Jews, 0.9% Russians and 0.6% Germans. There were also 336,000 refugees from the areas occupied by Germany, most of them Jews (198,000). Areas occupied by the Soviet Union were annexed to Soviet territory, with the exception of the Wilno/Vilnius region, which was transferred to the Republic of Lithuania. Lithuania itself was soon incorporated into the Soviet Union and, including the contested Wilno area, became the Lithuanian Soviet Socialist Republic. The Soviets considered the Kresy territories (prewar eastern Poland) to have been colonized by the Poles, and the Red Army was proclaimed a liberator of the conquered nationalities. Many Jews, Ukrainians, Belarusians and Lithuanians shared that point of view and cooperated with the new authorities in repressing the Poles. The Soviet administrators used slogans about class struggle and the dictatorship of the proletariat as they applied the policies of Stalinism and Sovietization in occupied eastern Poland. On 22 and 26 October 1939, the Soviets staged elections to the Moscow-controlled Supreme Soviets (legislative bodies) of the newly created provinces of Western Ukraine and Western Byelorussia to legitimize Soviet rule. The new assemblies subsequently called for incorporation into the Soviet Union, and the Supreme Soviet of the Soviet Union annexed the two territories to the already existing Soviet republics (the Ukrainian Soviet Socialist Republic and the Byelorussian Soviet Socialist Republic) on 2 November.
All institutions of the dismantled Polish state were closed down and reopened with new directors, who were mostly Russian and, in rare cases, Ukrainian or Polish. Lviv University and other schools were restarted as Soviet institutions. Some departments, such as law and the humanities, were abolished; new subjects, including Darwinism, Leninism and Stalinism, were taught by the reorganized departments. Tuition was free and monetary stipends were offered to students. The Soviet authorities attempted to remove all signs of Polish existence and activity in the area. On 21 December, the Polish currency was withdrawn from circulation without any exchange to the newly introduced ruble. In schools, Polish-language books were burned. All the media came under Moscow's control. The Soviet occupation implemented a police-state-type political regime based on terror. All Polish parties and organisations were disbanded. Only the communist party and its subordinate organisations were allowed to exist. Soviet teachers in schools encouraged children to spy on their parents. Ukrainian and Belarusian social organizations, closed by the Polish government in the 1930s, were reopened. In schools, the language of instruction was changed to Ukrainian or Belarusian. Organized religions were persecuted. Most churches were closed; priests and ministers were discriminated against by the authorities and subjected to high taxes, drafts into military service, arrests and deportations. Many enterprises were taken over by the state or failed, while much of agriculture was collectivized. Among the industrial installations dismantled and sent east were most of the Białystok textile industry factories. The Soviet economic policies soon resulted in serious difficulties, as shops lacked goods, food was scarce and people were threatened by famine.

According to the Soviet law of 29 November 1939, all residents of the annexed area, referred to as citizens of former Poland, automatically acquired Soviet citizenship. Residents were nevertheless required and pressured to consent, and those who opted out (most Poles did not want to give up their Polish citizenship) were threatened with repatriation to the Nazi-controlled territories of Poland. The Soviets exploited past ethnic tensions between Poles and other ethnic groups, inciting and encouraging violence against Poles by calling upon the minorities to "rectify the wrongs they had suffered during twenty years of Polish rule". The hostile propaganda resulted in instances of bloody repression. Parts of the Ukrainian population initially welcomed the end of Polish rule, and the phenomenon was strengthened by a land reform. However, the Soviet authorities soon started a campaign of forced collectivisation, which largely nullified the reform gains. There were large groups of prewar Polish citizens, notably Jewish youth and, to a lesser extent, Ukrainian peasants, who saw the Soviet power as an opportunity to start political or social activity outside of their traditional ethnic or cultural groups. Their enthusiasm faded with time as it became clear that the Soviet repressions affected everybody. The organisation of Ukrainians desiring an independent Ukraine (the OUN) was persecuted as "anti-Soviet". A rule of terror was started by the NKVD and other Soviet agencies. The first victims were the approximately 230,000 Polish prisoners of war. The Soviet Union had not signed any international convention on the rules of war, and the captives were denied the status of prisoners of war.
When the Soviets conducted recruitment activities among the Polish military, an overwhelming majority of the captured officers refused to cooperate; they were considered enemies of the Soviet Union, and a decision was made by the Soviet Politburo (5 March 1940) to secretly execute them (22,000 officers and others). The officers and a large number of ordinary soldiers were then murdered (see Katyn massacre) or sent to the Gulag. Of the 10,000–12,000 Poles sent to Kolyma in 1940–41, most of them POWs, only 583 men survived; they were released in 1941–42 to join the Polish Armed Forces in the East. Among the Poles who decided to cooperate with the Soviet authorities were Wanda Wasilewska, who was allowed to publish a Polish-language periodical in Lviv, and Zygmunt Berling, who from 1940 led a small group of Polish officers working on a concept for the formation of a Polish division in the Soviet Union. Wasilewska and Berling pushed for the Polish division again in September 1942, and the issue then gained traction.

Terror policies were also applied to the civilian population. The Soviet authorities regarded service for the prewar Polish state as a "crime against revolution" and "counter-revolutionary activity", and subsequently started arresting large numbers of Polish intelligentsia, politicians, civil servants and scientists, but also ordinary people suspected of posing a threat to Soviet rule. Schoolchildren as young as 10 or 12 years old who laughed at Soviet propaganda presented in schools were sent to prison, sometimes for as long as 10 years. The prisons soon became severely overcrowded with detainees suspected of anti-Soviet activities, and the NKVD had to open dozens of ad hoc prison sites in almost all towns of the region. The wave of arrests led to the forced resettlement of large categories of people (kulaks, Polish civil servants, forest workers, university professors or osadniks, for instance) to the Gulag labor camps. The Polish and formerly Polish citizens, a large proportion of whom were ethnic minorities, were deported mostly in 1940, typically to northern Russia, Kazakhstan and Siberia. Following Operation Barbarossa and the Sikorski–Mayski agreement, the exiled Poles were released in the summer of 1941 under the declared amnesty. Many thousands trekked south to join the newly formed Polish Army, but thousands were too weak to complete the journey or perished soon afterwards.

Collaboration with the occupiers

In occupied Poland, there was no official collaboration at either the political or the economic level. At an early stage of the war, the imprisoned former prime minister of Poland, Wincenty Witos, was asked by the Germans to lead a collaborationist government, but declined, as did Prince Janusz Radziwiłł. A few minor ONR members sought official collaboration, along with Professor Władysław Studnicki, who was subsequently interned for delivering, in December 1939, a protest letter to the German government about the brutal Nazi conduct in Poland. The occupying powers intended the permanent elimination of Polish governing structures and ruling elites and therefore did not seek this kind of cooperation. The Poles were not given positions of authority. The vast majority of the prewar citizenry collaborating with the Nazis came from the German minority in Poland, the members of which were offered several classes of the German Volksdeutsche ID. During the war, there were about 3 million former Polish citizens of German origin who signed the official Deutsche Volksliste.
Depending on the definition of collaboration (and of a Polish citizen, including ethnicity and minority status considerations), scholars estimate the number of "Polish collaborators" at around several thousand in a population of about 35 million (a figure supported by the Israeli War Crimes Commission). The estimate is based primarily on the number of death sentences for treason imposed by the Special Courts of the Polish Underground State. The underground courts sentenced 10,000 Poles, of whom 200 received death sentences. John Connelly quoted a Polish historian (Leszek Gondek) calling the phenomenon of Polish collaboration "marginal" and wrote that "only relatively small percentage of Polish population engaged in activities that may be described as collaboration when seen against the backdrop of European and world history".

In October 1939, the Nazis ordered a mobilization of the prewar Polish police into the service of the occupation authorities. The policemen were to report for duty or face the death penalty. The so-called Blue Police was formed. At its peak in 1943, it numbered around 16,000. Its primary task was to act as a regular police force and to deal with criminal activities, but its members were also used by the Germans to combat smuggling and to patrol the Jewish ghettos. Many individuals in the Blue Police followed German orders reluctantly, often disobeyed them, or even risked death acting against them. Many members of the Blue Police were double agents for the Polish resistance; a large percentage cooperated with the Home Army. Some of its officers were ultimately awarded the Righteous Among the Nations award for saving Jews. However, the moral position of Polish policemen was often compromised by the necessity of cooperation, or even collaboration, with the occupier. According to Timothy Snyder, acting in their capacity as a collaborationist force, the Blue Police may have killed more than 50,000 Jews. The police assisted the Nazis at tasks such as rounding up Poles for forced labor in Germany.

During Nazi Germany's Operation Barbarossa against the Soviet Union in June 1941, the German forces quickly overran the eastern half of Poland, controlled by the Red Army since 1939. New Reichskommissariats were formed across the Kresy macroregion. As the Soviet-German war progressed, the Home Army fought against both invaders, including the Soviet partisans in Poland, who often considered the Polish underground an enemy on a par with the Germans and who, from June 1943, were authorized by their command to denounce its members to the Nazis. Because the warfare between the Home Army and the Soviet partisans had intensified by the fall of 1943, a few Polish commanders accepted some weapons and ammunition from the Germans to fight the communist forces. Tadeusz Piotrowski quotes Joseph Rothschild as saying: "The Polish Home Army (AK) was by and large untainted by collaboration" and that "the honor of AK as a whole is beyond reproach". In 1944, the Germans clandestinely armed a few regional AK units operating in the area of Vilnius in order to encourage them to act against the Soviet forces in the region. The AK turned these weapons against the Nazis during Operation Ostra Brama. Such arrangements were purely tactical and did not evidence the type of ideological collaboration shown by the Vichy regime in France, the Quisling regime in Norway, or the OUN leadership in Distrikt Galizien. The Poles' main motivation was to gain intelligence on German morale and preparedness and to acquire much-needed equipment.
Former prime minister of Poland Leon Kozłowski was released from a Soviet prison and crossed into the German zone of occupation in October 1941. However, his reasons and the context of his action are not known. Historian Gunnar S. Paulsson estimates that in Warsaw the number of Polish citizens collaborating with the Nazis during the occupation might have been around "1 or 2 percent". Fugitive Jews (and members of the resistance) were handed over to the Gestapo by the so-called "szmalcowniks", who received financial rewards. The denunciators of various ethnicities, according to Isaiah Trunk and Rubin Katz, included members of the Jewish criminal underworld taking advantage of their inside knowledge. In the territories occupied by the Soviets before Operation Barbarossa, some members of the Jewish community collaborated with the NKVD. Soon after the German takeover of Jedwabne in July 1941, the Jedwabne pogrom, the exact circumstances of which are not clear, took place. Possibly about 300 members of Jewish families were rounded up by Nazi Germans and locked in a barn, which was then set on fire by Polish men in the Germans' presence.

Resistance in Poland

Armed resistance and the Underground State

The Polish resistance movement in World War II was the largest in all of occupied Europe. Resistance to the German occupation began almost at once and included guerrilla warfare. Centrally commanded military conspiratorial activity began with the Service for Poland's Victory (Służba Zwycięstwu Polski) organization, established on 27 September 1939. Poland's prewar political parties also resumed activity. The Service was replaced by the Polish government-in-exile in Paris with the Union of Armed Struggle (Związek Walki Zbrojnej), placed under the command of General Kazimierz Sosnkowski, a minister in that government. In June 1940 Władysław Sikorski, prime minister in exile and chief military commander, appointed General Stefan Rowecki, resident in Poland, to head the underground forces. Bataliony Chłopskie, a partisan force of the peasant movement, was active from August 1940. The Home Army (Armia Krajowa or AK), loyal to the government in exile then in London and a military arm of the Polish Underground State, was formed from the Union of Armed Struggle and other groups in February 1942. In July its forces approached 200,000 sworn soldiers, who undertook many successful anti-Nazi operations. Gwardia Ludowa and its successor Armia Ludowa were much smaller leftist formations, backed by the Soviet Union and controlled by the Polish Workers' Party. The ultra-nationalist National Armed Forces also operated separately. By mid-1944, the AK had some 400,000 members but was poorly armed. According to Czubiński, the AK counted 300,000 committed soldiers, who performed about 230,000 actions of sabotage and diversion throughout the war. According to Zbigniew Mikołejko, 200,000 soldiers and civilians participated in AK activities during the war. The attacks were hampered by the Nazi policy of retaliation against the civilian population, including mass executions of randomly rounded up individuals. The occupiers would typically kill one hundred Polish civilians for each German killed by the resistance. The AK encountered difficulties establishing itself in the eastern provinces (Kresy) and in the western areas annexed to Germany. General Rowecki was betrayed and arrested by the Gestapo in June 1943.
The Underground State originated in April 1940, when the exile government planned to establish its three "delegates" in occupied Poland: for the General Government, the German-annexed areas and the Soviet-occupied zone. After the fall of France, the structure was revised to include only a single delegate. The Underground State was endorsed by Poland's main prewar political blocs, including the peasant, socialist, nationalist and Catholic parties, and absorbed many supporters of the Sanation rule, humbled by the 1939 defeat. The parties established clandestine cooperation in February 1940 and dedicated themselves to a future postwar parliamentary democracy in Poland. From autumn 1940, the "State" was led by a delegate (Cyryl Ratajski) appointed by the Polish government in London. The Underground State maintained the continuity of Polish statehood in Poland and conducted a broad range of political, military, administrative, social, cultural, educational and other activities, within the practical limits of the conspiratorial environment. In November 1942 Jan Karski, a special emissary, was sent to London and later to Washington, to warn the Western Allies of the imminent extermination of the Jews in Poland. Karski was able to convey his personal observations to American Jewish leaders and he met with President Roosevelt.

After Operation Barbarossa

Leopold Trepper, a Polish-Jewish communist, worked as a master spy and was the chief of the Red Orchestra network in Western Europe. He learned of the planned German invasion of the Soviet Union and informed Stalin, but the Soviet leader took neither his advance warnings nor the similar alerts from Richard Sorge, his top intelligence officer in Japan, seriously. In Poland, neither the communists, who became more active after the 1941 Nazi invasion of the Soviet Union, nor the right-wing extremists joined the broad coalition or recognized the Government Delegate. The situation of the Polish armed resistance was made more difficult by the fact that the Allies now assigned Poland to the Soviet sphere of operations, and Britain refrained from or limited direct support of resistance movements in central-eastern Europe. With Stalin's encouragement, Polish communist institutions rivaling the government-in-exile and the Underground State were established. They included the Polish Workers' Party (from January 1942) and the State National Council in occupied Poland, as well as the Union of Polish Patriots in the Soviet Union. The Jewish Combat Organization groups undertook armed resistance activities in 1943. In April, the Germans began deporting the remaining Jews from the Warsaw Ghetto, provoking the Warsaw Ghetto Uprising (19 April–16 May). The Polish-Jewish leaders knew that the rising would be crushed but preferred to die fighting rather than wait to be deported to the death camps. After Operation Barbarossa, Soviet partisan units also developed and became militarily active in the General Government and, often aligned with the Polish leftist Armia Ludowa, posed a significant threat to the authority of the AK, which had not adopted a policy of more direct and widespread confrontations with the Nazis until 1943. The Soviet partisans were especially prevalent in Belarus and elsewhere in Kresy.
The presence of the various partisan formations, including Jewish units, the National Armed Forces and Bataliony Chłopskie (some right-wing, some left-wing), as well as of criminal armed bands preying on local populations, led to armed clashes and a climate of uncertainty, just as the Soviet armies, having established their superiority on the Eastern Front, were about to approach Poland's prewar eastern boundaries. In August 1943 and March 1944, the Underground State announced its long-term plan, partially designed to counter the attractiveness of some of the communist proposals. It promised parliamentary democracy, land reform, nationalization of the industrial base, more powerful trade unions, demands for territorial compensation from Germany, and re-establishment of the pre-1939 eastern border. Thus, the main political difference between the Underground State and the communists lay not in radical economic and social reforms, which were advocated by both sides, but in their attitudes towards national sovereignty, borders, and Polish-Soviet relations.

Operation Tempest and the Warsaw Uprising

In early 1943, the Home Army built up its forces in preparation for a national uprising. The situation was soon complicated by the continuing strength of Germany and the threat presented by the advance of the Soviets, who promoted a territorial and political vision of a future Poland at odds with what the Polish leaders were striving for. The Council of National Unity, a quasi-parliament, was instituted in occupied Poland on 9 January 1944; it was chaired by Kazimierz Pużak, a socialist. The plan for the establishment of Polish state authority ahead of the arrival of the Soviets was code-named Operation Tempest and began in late 1943. Its major elements were the campaign of the 27th Home Army Infantry Division in Volhynia (from February 1944), Operation Ostra Brama in Vilnius and the Warsaw Uprising. In the first two cases, the Soviets and their allies ruthlessly imposed their rule; in the case of the Warsaw Uprising, the Soviets waited for the Germans to defeat the insurgents. The forces of the Polish right wing called for an end to the war against Germany and for concentrating on fighting the communists and the Soviet threat. As Operation Tempest failed to achieve its goals in the disputed eastern provinces, the Soviets demanded that the Home Army be disbanded there and its underground soldiers enlist in the Soviet-allied First Polish Army. The AK commander Tadeusz Bór-Komorowski complied, disbanding his formations east of the Bug River in late July 1944 and ordering the fighters to join the Zygmunt Berling-led army. Some partisans obeyed, others refused, and many were arrested and persecuted by the Soviets. In the summer of 1944, as the Soviet forces approached Warsaw, the AK prepared an uprising in the city to try to prevent a communist takeover of the Polish government. The supreme Polish commander in London, General Sosnkowski, was pessimistic about the uprising's chances and sent General Leopold Okulicki to Warsaw, instructing him not to allow the uprising to proceed. In Warsaw Okulicki soon developed ideas of his own and became the uprising's most ardent proponent, pushing for a quick commencement of anti-German hostilities. The government in exile approved the uprising and on 27 July Prime Minister Stanisław Mikołajczyk cabled Jan Stanisław Jankowski, the government delegate, authorizing an uprising proclamation at a moment chosen by the authorities in Warsaw.
To some of the Polish underground commanders, the German collapse and the entry of the Soviets appeared imminent, and the AK, led by Bór-Komorowski, launched the Warsaw Uprising on 1 August. The insurgents' equipment and supplies would suffice for only a few days of fighting, and the Uprising was planned to last no longer than that. On 3 August Mikołajczyk, conferring with Stalin in Moscow, announced an upcoming "freeing of Warsaw any day now" and asked for military help. However, the Germans were still overwhelmingly strong, and the Soviet leaders and their nearby forces, who had not been consulted in advance, gave little assistance, contrary to the insurgents' expectations. Stalin had no interest in the Uprising's success and Moscow Radio denounced the leaders of the rising as a "gang of criminals". The Poles appealed to the Western Allies for help. The Royal Air Force and the Polish Air Force based in Italy dropped some arms, but little could be accomplished without Soviet involvement. Urged by the communist Polish Committee of National Liberation and the Western leaders, Stalin eventually allowed airdrops for the Warsaw insurgents and provided limited military assistance. Soviet supply flights continued from 13 to 29 September and an American relief operation was now allowed to land on Soviet-controlled territory, but the area under insurgent control had been greatly reduced and much of the dropped material was lost. General Berling's costly but failed attempt to support the fighters on 15–23 September using his Polish forces (First Army units crossed the Vistula but were slaughtered in a battle over the bridgehead) derailed his career.[z] The Soviets halted their western push at the Vistula for several months, directing their attention south toward the Balkans. In the Polish capital, desperate street-to-street and house-to-house fighting took place. The Warsaw AK district had 50,000 poorly armed members. They faced a massively reinforced German force of 22,000 SS and regular army troops. The Polish command hoped to establish a provisional Polish administration to greet the arriving Soviets, but came nowhere close to meeting this goal. The Germans and their allies engaged in mass slaughter of the civilian population, including between 40,000 and 50,000 massacred in the districts of Wola, Ochota and Mokotów. The SS and auxiliary units recruited from Soviet Army deserters (the Dirlewanger Brigade and the R.O.N.A. Brigade) were particularly brutal. After the Uprising's surrender on 2 October, the AK fighters were granted prisoner-of-war status by the Germans, but the civilian population remained unprotected and the survivors were punished and evacuated. Polish casualties are estimated at a minimum of 150,000 civilians killed, in addition to fewer than 20,000 AK soldiers. The German forces suffered comparable losses[y] and the First Polish Army lost a few thousand men. Some 150,000 civilians were sent to labor camps in the Reich or shipped to concentration camps such as Ravensbrück, Auschwitz, and Mauthausen. The city was almost totally destroyed by punitive German bombing and systematic demolition, but only after being looted of works of art and other property, which were then taken to Germany. General Sosnkowski, who criticized the Allied inaction, was relieved of his command.
Following the defeat of Operation Tempest and the Warsaw Uprising, the remaining resistance in Poland (the Underground State and the AK) ended up greatly destabilized, weakened and with a damaged reputation, just as the international decision-making processes affecting Poland's future were about to enter their final phase. The Warsaw Uprising allowed the Germans to largely destroy the AK as a fighting force, but the main beneficiaries were the Soviets and the communists, who were able to impose a communist government on postwar Poland with reduced risk of armed resistance. The Soviets and the allied First Polish Army, having resumed their offensive, entered Warsaw on 17 January 1945. In January 1945, the Home Army was officially disbanded. The AK, placed under General Okulicki after General Bór-Komorowski became a German prisoner, had become extremely demoralized in the later part of 1944. Okulicki issued the order dissolving the AK on 19 January, having been authorized to do so by President Raczkiewicz. The civilian Underground State structure remained in existence and hoped to participate in the future government of Poland.

The Holocaust in Poland

Jews in Poland

In 1938, the Polish government passed a law depriving of Polish citizenship those who had lived outside of Poland for over five years. The law was aimed at the tens of thousands of Polish Jews in Austria and Germany, threatened or expelled by the Nazi regime, and was used to prevent them from returning to Poland. The Polish courier Jan Karski wrote on Jewish, Polish and German relations in December 1939. In his opinion, some Poles felt contempt and dismay observing the barbaric anti-Jewish methods of the Nazis, while others watched their activities with interest and admiration. He warned of the threat of demoralization of broad segments of Polish society because of the narrow common ground that the Nazis shared with many ethnic Poles on the Jewish issue. According to Laurence Weinbaum, who quotes Aleksander Smolar, "in wartime Polish society (...) there was no stigma of collaboration attached to acting against the Jews".

Nazi persecution and elimination of ghettos

Persecution of the Jews by the Nazi occupation government, particularly in the urban areas, began immediately after the commencement of the occupation. In the first year and a half, the Germans confined themselves to stripping the Jews of their property, herding them into ghettos (approximately 400 were established beginning in October 1939) and putting them into forced labor in war-related industries. Thousands of Jews survived by managing to stay outside the ghettos. During this period, the Germans required a so-called Jewish community leadership, the Judenrat, in every town with a substantial Jewish population; it was able to some extent to bargain with the Germans. Already during this initial stage, tens of thousands of Jews died because of factors such as overcrowding, disease and starvation. Others survived, supported by the Jewish social self-help agency and the informal trading and smuggling of food and necessities into the ghettos. The ghettos were eliminated when their inhabitants were shipped to slave labor and extermination camps; the Łódź Ghetto lasted the longest (until August 1944) because goods were manufactured there for the Nazi war economy. The deportations from the Warsaw Ghetto began in July 1942. They were facilitated by collaborators, such as the Jewish police, and opposed by the resistance, including the Jewish Fighting Organization (ŻOB).
While many Jews reacted to their fate with disbelief and passivity, revolts did take place, including at the Treblinka and Sobibór camps and in a number of ghettos. The leftist ŻOB was established in the Warsaw Ghetto in July 1942 and was soon commanded by Mordechai Anielewicz. When the Nazis commenced the final liquidation of the remaining ghetto population on 19 April 1943, hundreds of Jewish fighters revolted. The Warsaw Ghetto Uprising lasted until 16 May and resulted in thousands of Jews killed and tens of thousands transported to Treblinka. The Polish underground and some Warsaw residents assisted the ghetto fighters.

Extermination of Jews

After the German attack on the Soviet Union in June 1941, special extermination squads (the Einsatzgruppen) were organized to kill Jews in the areas of eastern Poland which had been annexed by the Soviets in 1939. The Nazi anti-Jewish persecutions assumed the characteristics and proportions of genocide and, from the fall of 1941, of the organized Final Solution. About two million Jews were killed after the beginning of Operation Barbarossa, mostly by the Germans, in areas where the Soviet presence was replaced by Nazi occupation. Especially in the early weeks of the German offensive, many thousands of Jews were murdered by members of local communities in the western parts of the previously Soviet zone, such as the Baltic countries, eastern Poland, and western Ukraine. The pogroms, encouraged by the Germans, were sometimes perpetrated primarily or exclusively by the locals, including Lithuanians, Belarusians, Ukrainians and Poles. In 1942, the Germans engaged in the systematic killing of the Jews, beginning with the Jewish population of the General Government. The General Government had the largest Jewish population in Europe and was designated as the primary location of the Nazi installations for the elimination of Jews. Six extermination camps (Auschwitz, Bełżec, Chełmno, Majdanek, Sobibór and Treblinka) were established, in which the most extreme measure of the Holocaust, the mass murder of millions of Jews from Poland and other countries, was carried out between 1942 and 1945. Nearly three million Polish Jews were killed, most in death camps during the so-called Operation Reinhard. Prisoners of many nationalities were kept at Auschwitz and parts of the complex were used as a brutal and deadly labor camp, but about 80% of the arriving Jews were directly selected for death (some 900,000 people). Auschwitz, unlike Treblinka or Bełżec, was not strictly a death camp, but it still might have produced the highest number of Jewish victims.[k] Of Poland's prewar Jewish population of 3 million, only about 10% survived the war. Davies wrote of some 150,000 Jews surviving the war in Poland. Between 50,000 and 100,000 survived in hiding, helped by other Poles, according to Kochanski. About 250,000 escaped German-occupied Poland, going mostly to the Soviet Union. At Treblinka (a site that, together with Auschwitz, produced the highest number of victims) and other extermination locations, Heinrich Himmler ordered measures intended to conceal the Nazi crimes and prevent their future detection.

Efforts to save Jews

Some Poles tried to save Jews. In September 1942, the Provisional Committee to Aid Jews (Tymczasowy Komitet Pomocy Żydom) was founded on the initiative of Zofia Kossak-Szczucka. This body later became the Council to Aid Jews (Rada Pomocy Żydom), known by the code name Żegota.
Żegota is particularly known for its operation to save Jewish children, led by Irena Sendler. About 2,500 Jewish children were smuggled out of the Warsaw Ghetto, and thus saved, before the ghetto was eliminated. (See also Markowa, an example of a village that helped Jews.) Because of such actions, Polish citizens hold the highest number of Righteous Among the Nations awards at the Yad Vashem Museum. Thousands of Jews were saved with the help of the Greek Catholic Metropolitan Andrey Sheptytsky in western Ukraine. Helping Jews was extremely dangerous, because those involved exposed themselves and their families to punishment by death at the hands of the Nazis. The official policies of the Polish government in exile and the Polish Underground State called for providing assistance to the Jews. Right-wing organizations such as the National Radical Camp (ONR) and the National Armed Forces (NSZ) remained virulently antisemitic throughout the occupation period, and some people preyed on the Jewish victims.

Bloody ethnic conflict exploded during World War II in areas of today's western Ukraine, inhabited at that time by Ukrainians and a Polish minority (the region's Jews had mostly been killed by the Nazis before 1943). The Ukrainians, who blamed the Poles for preventing the emergence of their national state and for Poland's nationality policies (such as military colonization in Kresy), had undertaken during the interwar years a campaign of terror led by the Organization of Ukrainian Nationalists (OUN). Under Piłsudski and his successors, the Polish state authorities responded with harsh pacification measures. The events that unfolded in the 1940s were a legacy of this bitterness and also a result of other factors, such as the activities of Nazi Germany and the Soviet Union. Ukrainians, generally assigned by the Nazis the same inferior status as Poles, received more favorable treatment in many practical respects. However, the Germans thwarted the Ukrainian attempts to establish a Ukrainian state, imprisoned Ukrainian leaders, and split the occupied lands that Ukrainians considered theirs into two administrative units. Following the Soviet victory at Stalingrad, the Ukrainian nationalists feared a repeat of the post-World War I scenario: a power vacuum left by the exhausted great powers and a Polish armed takeover of western Ukraine. Aiming for a country without any Poles or Polish interests left, the Ukrainian Insurgent Army (UPA) undertook to create an ethnically homogeneous Ukrainian society by physically eliminating the Poles; the German occupiers for the most part did not intervene in the resulting campaigns of ethnic cleansing. The wartime Polish-Ukrainian conflict commenced with the massacres of Poles in Volhynia (Polish: Rzeź wołyńska, literally: Volhynian slaughter), a campaign of ethnic mass murder in western Reichskommissariat Ukraine, which before the war had been the Polish Volhynian Voivodeship. The conflict took place mainly between late March 1943 and August 1947, extending beyond World War II. The actions, orchestrated and conducted largely by the UPA together with other Ukrainian groups and local Ukrainian peasants in three former Polish provinces (voivodeships), resulted in the killing of between 30,000 and 40,000 Polish civilians in Volhynia alone. Other major regions of the slaughter of Poles were eastern Galicia (10,000–20,000 killed) and the eastern Lublin area (10,000–20,000 killed).
The peak of the massacres took place in July and August 1943, when Dmytro Klyachkivsky, a senior UPA commander, ordered the extermination of the entire ethnically Polish population between 16 and 60 years of age. Tens of thousands of Poles fled the affected areas. The massacres committed by the UPA led to ethnic cleansing and retaliatory killings by Poles against local Ukrainians both east and west of the Curzon Line. Estimates of the number of Ukrainians killed in Polish reprisals, across all areas affected by the conflict, vary from 10,000 to 20,000. The reprisal killings were committed by the Home Army and Polish self-defense units, which were restrained from mounting indiscriminate attacks by the Polish government in exile, whose goal was to retake and govern western Ukraine after the war. The ethnic cleansing and the drive for ethnic homogeneity reached full scale with the postwar Soviet and Polish communist removal of the Polish and Ukrainian populations to the respective sides of the Poland–Soviet Ukraine border and the implementation of Operation Vistula, the dispersal of the remaining Ukrainians into remote regions of Poland. Due in part to the successive occupations of the region, ethnic Poles and Ukrainians were brutally pitted against each other, first under the German occupation and later under the Soviet occupation. Tens or hundreds of thousands on both sides (estimates differ widely) lost their lives over the course of this conflict.

Government in exile and communist victory

Polish government in France and Britain

Because of the Polish government leaders' internment in Romania, an essentially new government was assembled in Paris as a government in exile. Under French pressure, on 30 September 1939 Władysław Raczkiewicz was appointed president and General Władysław Sikorski became prime minister and commander-in-chief of the Polish armed forces, which were being reconstructed in the West and as an underground force in occupied Poland. The exile government was authorized by the Sanation government leaders interned in Romania and was conceived as a continuation of the prewar government, but it was beset by strong tensions between the sympathizers of the Sanation regime, led by President Raczkiewicz and General Kazimierz Sosnkowski, and the anti-Sanation opposition, led by Prime Minister Sikorski, General Józef Haller, and politicians from the Polish parties persecuted in the past in Sanation Poland. The 1935 April Constitution of Poland, previously rejected by the opposition as illegitimate, was retained for the sake of continuity of the national government. President Raczkiewicz agreed not to use his extraordinary powers, granted by that constitution, except in agreement with the prime minister. There were calls for a war tribunal prosecution of the top leaders deemed responsible for the 1939 defeat. Sikorski blocked such attempts, but allowed forms of persecution of many exiles seen as compromised by their past roles in Poland's ruling circles. A quasi-parliamentary and advisory National Council was established in December 1939. It was chaired by the senior Polish statesman Ignacy Paderewski. The vice-chairmen were Stanisław Mikołajczyk, a peasant movement leader, Herman Lieberman, a socialist, and Tadeusz Bielecki, a nationalist.
The war was expected to end soon in an Allied victory, and the government's goal was to reestablish the Polish state within its pre-1939 borders, augmented by East Prussia, Danzig, and significant planned adjustments of the western border, all to be obtained at the expense of Germany. The government considered Poland to be in a state of war with Germany, but not with the Soviet Union, the relationship with which was not clearly specified.[f] The eastern border problem placed the Polish government on a collision course not only with the Soviets, but also with the Western Allies, many of whose politicians, including Winston Churchill, thought of Poland's proper eastern boundary in terms of the "Curzon Line". The exile government in Paris was recognized by France, Britain, and many other countries and was highly popular in occupied Poland. By the spring of 1940, an 82,000-strong army had been mobilized in France and elsewhere. Polish soldiers and ships fought in the Norwegian Campaign. When France was invaded and defeated by Germany, the Polish Army units, dispersed and attached to various French formations, fought in the defense of France and covered the French retreat, losing 1,400 men. On 18 June 1940, Sikorski went to England and made arrangements for the evacuation of the Polish government and armed forces to the British Isles. Only 19,000 soldiers and airmen could be evacuated, which amounted to less than a quarter of the Polish military personnel established in France.[h] The infighting within the exile government circles continued. On 18 July President Raczkiewicz dismissed Prime Minister Sikorski because of disagreements concerning possible cooperation with the Soviet Union. Sikorski's supporters in the Polish military and the British government intervened and Sikorski was reinstated, but the internal conflict among the Polish émigrés intensified. Polish pilots became famous because of their exceptional contributions during the Battle of Britain. Polish sailors, on Polish and British ships, served with distinction in the Battle of the Atlantic. Polish soldiers participated in the North African Campaign.

Departure of the Polish army from the Soviet Union

After Germany attacked the Soviet Union on 22 June 1941, the British government allied itself with the Soviet Union on 12 July and Churchill pressed Sikorski to also reach an agreement with the Soviets. The Sikorski–Mayski agreement was signed on 30 July, despite strong resistance from Sikorski's opponents in the exile government (three cabinet ministers resigned, including Foreign Minister August Zaleski and General Sosnkowski), and Polish-Soviet diplomatic relations were restored. The territorial provisions of the Molotov–Ribbentrop Pact were declared invalid. Polish soldiers and others imprisoned in the Soviet Union since 1939 were released, and the formation of a Polish army there was agreed; it was intended to fight on the Eastern Front, help the Red Army liberate Poland and establish a sovereign Polish state. Other issues, including Poland's borders, were left to be determined in the future. A Polish-Soviet military agreement was signed on 14 August; it attempted to specify the political and operational conditions for the functioning of the Polish army. Sikorski's preference, stated around 1 September, was for the Polish army to be deployed in defense of the Caucasus oil fields, which would allow it to maintain close contact with the British forces.
To resolve the various problems that surfaced during the recruitment and training of the Polish divisions and concerning their planned use, Sikorski went to the Soviet Union, where he negotiated with Stalin; the two announced a joint declaration "of friendship and mutual assistance" on 4 December 1941. But political and practical difficulties continued (the Soviets were unable or unwilling to properly feed and supply the Poles); ultimately Władysław Anders, the commander of the Polish army in the Soviet Union, and Sikorski obtained, with British help, Stalin's permission to move the force to the Middle East. According to one source, 78,631 Polish soldiers and tens of thousands of civilians left the Soviet Union and went to Iran in the spring and summer of 1942. The majority of General Anders' men formed the II Corps in the Middle East, from where the corps was transported to Italy in early 1944 to participate in the Italian Campaign. Its strength grew from 60,000 soldiers to 100,000 by mid-1945. Overall, the Polish soldiers were taken from where they could conceivably have affected the fate of Poland and enhanced the faltering standing of the Polish government in exile to where, as it turned out, they could not.[g]

In the shadow of the Soviet offensive, death of Prime Minister Sikorski

As the Soviet forces began their westward offensive with the victory at Stalingrad, it became increasingly apparent that Stalin's vision of a future Poland and of its borders was fundamentally different from that of the Polish government in London and the Polish Underground State, and Polish-Soviet relations kept deteriorating. Polish communist institutions rivaling those of the main national independence and pro-Western movement were established in Poland in January 1942 (the Polish Workers' Party) and in the Soviet Union (the Union of Polish Patriots). Early in 1943, the Polish communists (their delegation led by Władysław Gomułka) engaged in negotiations in Warsaw with the Delegation of the government in exile, but no common understanding was reached and the Delegation terminated the talks after the Soviet-Polish breach in diplomatic relations caused by the dispute concerning the Katyn massacre. The Polish Workers' Party formulated its separate program and from November was officially under Gomułka's leadership. On the initiative of the Union of Polish Patriots, presided over by Wanda Wasilewska, in the spring of 1943 the Soviets began recruiting for a leftist Polish army led by Zygmunt Berling, a Polish Army colonel, to replace the "treacherous" Anders army that had left. The Kościuszko Division was rushed to its first military engagement and fought at the Battle of Lenino on 12–13 October. The Soviet-based communist faction, organized around the Central Bureau of Polish Communists (activated in January 1944) and directed by such future ruling personalities of Stalinist Poland as Jakub Berman, Hilary Minc, and Roman Zambrowski, was increasingly influential. It also had prevailing influence over the formation of Berling's First Polish Army in 1943–44. In April 1943, the Germans discovered the graves of 4,000 or more Polish officers at Katyn near Smolensk. The Polish government, suspecting the Soviets to be the perpetrators of the atrocity, asked the Red Cross to investigate. The Soviets denied involvement and the request was soon withdrawn by Sikorski under British and American pressure, but Stalin reacted by "suspending" diplomatic relations with the government in exile on 25 April.
Information about the Katyn massacre was suppressed during and after the war by the British, to whom the revelation was an embarrassment and presented a political difficulty. Prime Minister Sikorski, the most prominent of the Polish exile leaders, was killed in an air crash near Gibraltar on 4 July 1943. Sikorski was succeeded by Stanisław Mikołajczyk as head of the government in exile and by Kazimierz Sosnkowski as the top military commander. Sikorski had been willing to work closely with Churchill, including on the issue of cooperation with the Soviets. The prime minister believed that Poland's strategic and economic weaknesses would be eliminated by a takeover of German East Prussia, Pomerania and Silesia and that Polish territorial concessions in the east were feasible. However, Sikorski was also credited with preventing the Soviet territorial demands from being granted in the Anglo-Soviet Treaty of 1942. After his death, the Polish government's position within the Allied coalition deteriorated further and the body splintered into quarreling factions.

Decline of the government in exile

At the Moscow Conference of the three foreign ministers in October 1943, borders were not discussed at the request of the Polish government, but US President Franklin D. Roosevelt had already expressed his support for Britain's approval of the Curzon Line as the future Polish-Soviet boundary. The great powers represented there divided Europe into spheres of influence, and Poland was placed within the Soviet sphere. The Poles were also disappointed by the lack of progress regarding the resumption of Polish-Soviet diplomatic ties, an urgent issue because the Soviet armies were moving toward Poland's 1939 frontiers. In November–December 1943, the Allied Tehran Conference took place. President Roosevelt and Prime Minister Churchill agreed with Stalin on the issue of using the Curzon Line as the basis of Poland's new eastern border and compensating Poland with lands taken from Germany. The strategic war alliance with the Soviets inevitably outweighed the Western loyalty toward the Polish government and people. The Poles were not consulted or properly informed of the three Allied leaders' decisions. With the Western Allies delaying a major offensive from the west,[j] it was clear that it would be the Soviet Union that would enter Poland and drive out the Nazi Germans. The Soviet offensive aimed at taking the Vistula basin commenced in January 1944. Churchill applied pressure to Mikołajczyk, demanding accommodation with the Soviets, including on the borders issue. As the Red Army marched into Poland defeating the Nazis, Stalin toughened his stance against the Polish exile government, wanting not only recognition of the proposed frontiers, but also the removal from the government of all elements 'hostile to the Soviet Union', which meant President Raczkiewicz, armed forces commander Sosnkowski, and other ministers. The Underground State governing structures were formed by the Peasant Alliance, the Socialist Party, the National Alliance and the Labour Alliance. They acted as rivals in a fragile coalition, each defining its own identity and posturing for the expected postwar contest for power. The Polish Government in London was losing its already weak influence on the views of the British and American governments.
The British and Soviet demands on the exile government were made in January 1944, in the context of a possible renewal of Polish-Soviet diplomatic relations and, contingent on Polish agreement, Soviet consent to an independent, presumably "Finlandized" Polish state. After the Polish government in London refused to accept the conditions, the Soviets engaged only in supporting the leftist government structures they were in the process of facilitating, allowing contacts with Prime Minister Mikołajczyk, but already within the framework of communist control.[q] In the aftermath of the controversial visit of Oskar R. Lange to the Soviet Union, the Polish American Congress was established in the USA in May 1944; among the organization's goals was promoting the interests of an independent Poland before the US government. Mikołajczyk visited the USA in June and on several occasions met with President Roosevelt, who urged him to travel to Moscow and talk to the Soviet leaders directly. Mikołajczyk, subsequently engaged in negotiations with Stalin and the emerging Polish communist government (the PKWN), eventually resigned his post, and Tomasz Arciszewski became the new prime minister in exile in November 1944. Mikołajczyk's disagreements with his coalition partners (he was unable to convince the ministers that restoration of the prewar eastern border was no longer feasible and further compromises were necessary) and his departure created a vacuum, because the British and the Americans were practically unwilling to deal with the Polish government that followed.[o] In 1944, the Polish forces in the West were making a substantial contribution to the war. In May, participating in the Italian Campaign, the Second Corps under General Anders stormed the fortress of Monte Cassino and opened a road to Rome. In the summer and fall, the Corps participated in the Battle of Ancona and the Gothic Line offensive, finishing the campaign with the Battle of Bologna in April 1945. In August 1944, after the Normandy landings, General Stanisław Maczek's 1st Armoured Division distinguished itself at the Battle of Falaise. After fighting the Battle of Chambois and defending Hill 262, the division crossed into Belgium, where it took Ypres. In October, heavy fighting helped secure Antwerp and resulted in the taking of the Dutch city of Breda. In April 1945 the division concluded its combat in Germany, where it occupied Wilhelmshaven and liberated a prisoner-of-war camp that held many Polish female POWs captured by the Nazis after the Warsaw Uprising. In September 1944, General Stanisław Sosabowski's Parachute Brigade fought hard at the Battle of Arnhem. The Polish Air Force, comprising 15 warplane squadrons and 10,000 pilots, fully participated in the Western offensive, as did the Polish Navy ships.

Soviet and Polish communist victory

The Bug River was crossed by the Soviets (1st Belorussian Front) on 19 July 1944 and their commander Konstantin Rokossovsky headed for Warsaw, together with the allied Polish forces. As they approached the Polish capital, German panzer divisions counterattacked, while the Poles launched the Warsaw Uprising. After the German attack was brought under control, Rokossovsky informed Stalin on 8 August that his forces would be ready to engage in an offensive against the Germans in Warsaw around 25 August, but received no reply.
The Soviets secured their Vistula bridgeheads and, with the First Polish Army, established control over the Praga east-bank districts of Warsaw.[z] The situation on the ground, combined with political and strategic considerations, resulted in the Soviet decision to pause at the Vistula for the remainder of 1944. The government in exile in London was determined that the Home Army would cooperate with the advancing Red Army on a tactical level while Polish civil authorities from the Underground State took power in Allied-controlled Polish territory, to ensure that Poland remained an independent country after the war. However, the failure of Operation Tempest and the Warsaw Uprising laid the country open to the establishment of communist rule and Soviet domination. The Soviets carried out arrests, executions and deportations of Home Army and Underground State members, although AK partisans were generally encouraged to join the communist-led Polish armies. In January 1945, the Soviet and the allied Polish armies undertook a massive offensive, aiming at the liberation of Poland and the defeat of Nazi Germany. Marshal Ivan Konev's 1st Ukrainian Front broke out of its Sandomierz Vistula bridgehead on 11 January and rapidly moved west, taking Radom, Częstochowa and Kielce on 16 January. Kraków was liberated on 18 January, a day after Hans Frank and the German administration fled the city. Marshal Konev's forces then advanced toward Upper Silesia, freeing the remaining survivors of the Auschwitz concentration camp on 27 January. In early February, the 1st Ukrainian Front reached the Oder River in the vicinity of Breslau. North of the Ukrainian Front, the 1st Belorussian Front under Marshal Georgy Zhukov advanced to the Oder along the Łódź–Poznań route. Still further north operated the 2nd Belorussian Front commanded by Marshal Konstantin Rokossovsky. The First Polish Army fought on the 1st and 2nd Belorussian Fronts. It entered the rubble of Warsaw on 17 January, formally liberating the city. Poznań was taken by Soviet formations after a bloody battle. As part of the westbound offensive, but also to support the clearing of East Prussia and the forces engaged in the Battle of Königsberg, the First Polish Army was directed northwards to the Pomeranian region, where its drive began at the end of January. The heaviest battles fought by the Poles included the breaching of the Pomeranian Wall, accomplished by the badly battered First Polish Army and the Soviets on 5 February, during their East Pomeranian Offensive. The Poles, commanded by General Stanisław Popławski, then led the assault on Kolberg, completed on 18 March. Gdynia and Danzig were taken over by the 2nd Belorussian Front by the end of March, with the participation of the Polish 1st Armoured Brigade. The First Polish Army's campaign continued as it forced the Oder in April and finally reached the Elbe River in early May. The Second Polish Army was led by Karol Świerczewski and operated with the 1st Ukrainian Front. Its soldiers, recently conscripted, poorly provided for and badly commanded, advanced toward Dresden from 16 April and suffered huge losses in the Battle of Bautzen. Subsequently, the Second Army took part in the capture of Dresden and then crossed into Czechoslovakia to fight in the final Prague Offensive, entering the city on 11 May.
The Polish Army, placed under the overall command of Michał Rola-Żymierski, was ultimately expanded to 400,000 soldiers and, helping to defeat Germany all the way to the Battle of Berlin (elements of the First Polish Army), suffered losses equal to those experienced during the 1939 defense of the country (over 60,000 soldiers killed). Over 600,000 Soviet soldiers died fighting German troops in Poland. Terrified by reports of Soviet-committed atrocities, masses of Germans fled westwards. It is estimated that in the final stages of the war, the Polish armed forces were the fourth largest on the Allied side, after the armies of the Soviet Union, the United States, and the United Kingdom.

Polish state reestablished with new borders and under Soviet domination

War losses of Poland

The numerical dimensions of Polish World War II losses are difficult to ascertain. According to the official data of the 1946 Polish War Reparations Bureau (Biuro Odszkodowań Wojennych), 644,000 Polish citizens died as a result of military action and 5.1 million died as a result of the occupiers' repressions and extermination policies. According to Czubiński, the Soviet Union was responsible for the deaths of about 50,000 of the exterminated people. Approximately 90% of Polish Jews perished, and most of those who survived did so by fleeing to the Soviet Union. An estimated 380,000 Polish Jews survived the war. According to an estimate of the Central Committee of Polish Jews, 50,000 Jews survived in Poland. Close to 300,000 Jews found themselves in Poland soon after the war. For a number of reasons, including antisemitic activities such as the Kielce pogrom of 1946, Żydokomuna accusations, the loss of families, communities and property, and the desire to emigrate to Palestine or to places in the West deemed more advantageous than postwar Poland, most of the surviving Jews left Poland in several stages after the war. The goal of the Polish communist authorities was a state populated by ethnic Poles, and officials often informally facilitated the departures of the Jews. The heaviest losses among ethnic Poles were experienced by people with secondary and higher education, who were targeted by the occupiers and of whom a third or more did not survive. Academics and professional people suffered the most. According to Kochanski, only about 10% of the human losses of Poland were a result of military action; the rest came from intentional exterminations, persecutions, war and occupation hardships and the attendant attrition. Some 800,000 Poles became permanently disabled, and large numbers failed to return from abroad, which further reduced the manpower potential of Poland. Of the soldiers enlisted in the Polish Armed Forces in the West, 105,000, or about one-half, returned to Poland after the war.[x] The war destroyed 38% of Poland's national assets. A substantial majority of Polish industrial installations and agricultural infrastructure had been lost. Warsaw and a number of other cities were for the most part destroyed and required complete or large-scale rebuilding.

Beginnings of communist government

The State National Council (KRN), chaired by Bolesław Bierut, was established in Warsaw by the Polish Workers' Party (PPR) on 1 January 1944. The Armia Ludowa was its army. The Polish communist centers in Warsaw and in Moscow initially operated separately and had different visions regarding cooperation with the Soviet Union and other issues.
In the spring of 1944, the KRN sent a delegation to the Soviet Union, where it gained Stalin's recognition, and the two branches began working together. In intense negotiations the two Polish communist groups agreed to establish the Polish Committee of National Liberation (PKWN), a provisional government of sorts. As the Soviets advanced through Poland in 1944 and 1945, the German administration collapsed. The communist-controlled PKWN was installed in July 1944 in Lublin, the first major Polish city within the new boundaries to be seized by the Soviets from the Nazis, and began to take over the administration of the country as the Germans retreated. The Polish government in London formally protested the establishment of the PKWN. The PKWN was led by Edward Osóbka-Morawski, a socialist, and included other non-communists. The PKWN Manifesto was proclaimed in Chełm on 22 July, initiating the crucial land reform. The agrarian reform, according to Norman Davies, was moderate and very popular.[b] The communists constituted only a small but highly organized and influential minority in the forming Polish pro-Soviet camp, which was gaining strength and also included leaders and factions from such main political blocs as the agrarian, socialist, Zionist, and nationalist movements. The Polish Left in particular, with considerable support from the peasant movement leaders, both critical of the Second Republic's record, was inclined to accept the Soviet territorial concepts and called for the creation of a more egalitarian society. They became empowered and commenced the formation of the new Polish administration, disregarding the existing Underground State structures. The so-called Provisional Government of the Republic of Poland was established at the end of 1944 in Lublin and was recognized by the Soviet Union, Czechoslovakia and Yugoslavia. It was headed by the socialist Osóbka-Morawski, but the communists held a majority of the key posts. In April 1945, the provisional government signed a mutual friendship, alliance and cooperation pact with the Soviet Union. In late 1944 and early 1945, the Poles on the one hand tended to resent the Soviet Union and communism and feared that Poland would become a Soviet dependency, while on the other hand leftist viewpoints were increasingly popular among the population. There was little support for a continuation of the prewar policies. By the time of the Yalta Conference in February 1945, the Soviets were at the height of their power, while the fronts in Western Europe and Italy had not advanced as quickly as expected. The Allies continued their discussions and informally finalized decisions on the postwar order in Europe. Churchill and Roosevelt accepted the Curzon Line as the basis of Poland's eastern border, but disagreed with Stalin on the extent of Poland's western expansion at the expense of Germany.[n] Poland was to receive a compromise provisional government of national unity (pending the agreed free elections), including both the communist elements established in Lublin, now unofficially considered the principal faction, and pro-Western forces. There was disagreement over the inclusion of the London-based government in exile as the main pro-Western faction in the government of national unity. The Polish government in exile reacted to the Yalta announcements (unlike the Tehran Conference outcomes, the Yalta results were made public) with a series of fervent protests.
The Underground State in Poland, through its clandestine Council of National Unity, issued a more measured and pragmatic response, regretting the sacrifices imposed on Poland but expecting a representative government to be established, and committing itself to adapt to the situation and promote "friendly and peaceful relations" with the Soviet Union. The tripartite Allied commission, made up of Vyacheslav Molotov and the British and American ambassadors in Moscow, worked on the composition of the Polish government of national unity from 23 February, but the negotiations soon stalled because of different interpretations of the Yalta Conference agreements. The former prime minister in exile Stanisław Mikołajczyk, approached by representatives of the communist-controlled Provisional Government, refused to make a separate deal with that body, but he did make a statement accepting the Yalta decisions on 15 April. Because of the continuing disagreement on the composition of the government of national unity, Churchill convinced Mikołajczyk to take part in a conference in Moscow in June 1945, where he and other Polish democrats agreed with Stalin to a temporary deal excluding the government in exile (to last until the elections, which were promised to take place soon but for which no specific time frame was provided or even discussed). Mikołajczyk was perceived in the West as the only reasonable Polish politician. Based on the understanding reached in Moscow by the three powers with Mikołajczyk's help, the Government of National Unity was constituted on 28 June 1945, with Osóbka-Morawski as prime minister and Władysław Gomułka and Mikołajczyk as deputy prime ministers. Mikołajczyk returned to Poland with Stanisław Grabski in July and was enthusiastically greeted by large crowds in several Polish cities. The new government was quickly recognized by the United Kingdom, the United States, and most other countries. The nominally coalition government was in reality controlled entirely by Gomułka's Polish Workers' Party and other Polish politicians convinced of the inevitability of Soviet domination. The government was charged with conducting elections and normalizing the situation in Poland. The exile government in London, no longer recognized by the great powers, remained in existence until 1991.

Persecution of opposition

Persecution of the opposition intensified in October 1944, when the PKWN authorities encountered widespread loyalty problems among the now conscripted military personnel and other sections of Polish society. The enforcement of communist rule was undertaken by the NKVD and the Polish security services, backed by the massive presence of the Red Army in Poland. Potential political opponents of the communists were subjected to Soviet terror campaigns, with many of them arrested, executed or tortured. According to one estimate, 25,000 people lost their lives in labor camps created by the Soviets as early as 1944. A conspiratorial AK-related organization known as NIE (for Niepodległość, or Independence) was set up in 1944 by Emil Fieldorf. General Okulicki became its commander and NIE remained in existence after the AK was dissolved in January 1945. Its activities were directed against the communist Provisional Government. However, as a result of Okulicki's arrest by the NKVD in March and the general persecution, NIE faded away.
The Armed Forces Delegation for Poland was established instead in May, to be finally replaced by the Freedom and Independence (WiN) formation, whose goal was to organize political rather than military resistance to communist domination. Government Delegate Jan Stanisław Jankowski, the chairman of the Council of National Unity Kazimierz Pużak, and thirteen other Polish Underground State leaders were invited to talks with General Ivan Serov of the NKVD, which they attended on 27 March 1945. They were all arrested and taken to Moscow to await trial. The Polish communist Provisional Government and the Western leaders were not informed by the Soviets of the arrests. The British and the Americans were notified by the Polish government in exile and, after the Soviets' belated admission, unsuccessfully pressured the Soviet government for the release of the captives. In June 1945, the Trial of the Sixteen was staged in Moscow. The accused were charged with anti-Soviet subversion and, presumably because of the ongoing negotiations on the formation of the Polish government and the Western interventions, received sentences that were lenient by Soviet standards. Okulicki was condemned to ten years in prison. Post-German industrial and other property was looted by the Soviets as war reparations, even though the former lands of eastern Germany were coming under permanent Polish administration.[v] As the Soviets and the pro-Soviet Poles solidified their control of the country, a political struggle with the suppressed and harassed opposition ensued, accompanied by a residual but brutally fought armed rebellion waged by unreconciled elements of the former, now officially disbanded, underground and by the nationalist right wing. Thousands of militiamen, PPR members and others were murdered before the communist authorities brought the situation under control.[r] A "Democratic Bloc" comprising the communists and their socialist, rural and urban allies was established. Mikołajczyk's Polish People's Party (PSL), which refused to join the Bloc, was the only legal opposition; it counted on winning the promised legislative elections. Other contemporary Polish movements, including the National Democracy, Sanation, and Christian Democracy, were not allowed to function legally and were dealt with by the Polish and Soviet internal security organs. The Western Allies, and their leaders Roosevelt and Churchill in particular, have been criticized by Polish writers and some Western historians for what most Poles see as the abandonment of Poland to Soviet rule. Decisions were made at the Tehran, Yalta and Potsdam conferences and on other occasions that amounted, according to such opinions, to Western complicity in Stalin's takeover of Eastern Europe.[a] According to Czubiński, blaming the Western powers, especially Winston Churchill, for a "betrayal" of the Polish ally "seems a complete misunderstanding".

Soviet-controlled Polish state

Postwar Poland was a state of reduced sovereignty, strongly dependent on the Soviet Union, but it was the only one possible under the existing circumstances, and it was internationally recognized. The Polish Left's cooperation with Stalin's regime made the preservation of a Polish state within favorable borders possible. The dominant Polish Workers' Party had a strictly pro-Soviet branch, led by Bierut and a number of Jewish communist activists of internationalist outlook, and a national branch, willing to take a "Polish route to socialism", led by Gomułka.
As agreed by the Allies at Yalta, the Soviet Union incorporated the lands of eastern Poland (Kresy, east of the Curzon Line) which it had previously occupied and annexed in 1939 (see Territories of Poland annexed by the Soviet Union). Deferring to Stalin's territorial schemes, the Allies compensated Poland with the German territories east of the Oder–Neisse line: parts of Pomerania, Silesia and East Prussia (referred to in the Polish communist government's propaganda as the Recovered Territories).[m] The deal was practically, but in principle not permanently, finalized at the Potsdam Conference (17 July to 2 August 1945).[u] The entire country was shifted to the west, and its outline came to resemble the territory of the early medieval Piast state. Under the Potsdam agreement, several million Germans were expelled and forced to relocate with their families to post-war Germany. A large proportion of them had already fled without waiting for the Potsdam decrees. Davies wrote that the resettlement of Germans was not merely an act of wartime revenge, but a result of decades-old Allied policy. The Russians as well as the British saw German East Prussia as a product of German militarism, the "root of Europe's miseries", and the Allies therefore intended to eradicate it. The new western and northern territories of Poland were repopulated with Poles "repatriated" from the eastern regions now in the Soviet Union (2–3 million people) and from other places.[w] The precise Soviet–Polish border was delineated in the Polish–Soviet border agreement of 16 August 1945. The new Poland emerged 20% smaller (by 77,700 km², or about 30,000 mi²) in comparison with its 1939 borders. Poorly developed eastern regions were lost and industrialized western regions gained, but the emotional impact on many Poles was clearly negative. The population transfers also included the moving of Ukrainians and Belarusians from Poland into their respective Soviet republics. In particular, between 1944 and 1947 the Soviet and Polish communist authorities expelled nearly 700,000 Ukrainians and Lemkos, transferring most of them into Soviet Ukraine and then dispersing the remaining groups across the Polish Recovered Territories during Operation Vistula, thus ensuring that postwar Poland would not have significant minorities or any minority concentrations to contend with. Thousands were killed in the attendant strife and violence. After the war, many displaced Poles and some of those living in Kresy, now in the Soviet Union, did not end up in the new Poland. The population within the respective official borders decreased from 35.1 million in 1939 to 23.7 million in 1946. Poland's western borders were soon questioned by the Germans and by many in the West, while the planned peace conference never materialized because the Cold War replaced wartime cooperation. The borders, essential to Poland's existence, were in practice guaranteed by the Soviet Union, which only increased the dependence of Polish government leaders on their Soviet counterparts.
- List of Polish cities damaged in World War II
- Polish culture during World War II
- World War II casualties of Poland
- History of Poland (1945–89)
a.^ According to Davies, the Grand Alliance (Britain, the USA and the Soviet Union) decided in the meetings of its three leaders that the unconditional defeat of the Reich was the Alliance's overriding priority (principal war aim).
Once this definition was accepted, the two Western powers, having obliged themselves not to withdraw from the conflict for any reason (which ruled out using the threat of withdrawal to pressure the Soviets), had lost their ability to meaningfully influence Soviet actions.
b.^ PKWN's land reform decree was issued on September 6, 1944. The Polish communists were reluctant to proceed with the land reform, a fundamental departure from the old Polish legal order (they claimed adherence to the 1921 March Constitution of Poland). Polish peasants were reluctant to take over landowners' possessions. In late September, Joseph Stalin summoned the KRN and PKWN leaders, including Bierut and Gomułka, to Moscow and inquired about the progress of the land reform. The Soviet leader asked how many estates had already been parceled out and was very unhappy to learn that the answer was zero. He repeatedly lectured the Polish leaders, appealing to their communist convictions and patriotism. Stalin urged them to start implementing the land reform without further delay, not to worry excessively about legal proprieties, since it was a revolutionary action, and to take advantage of the fact that the Red Army was still in Poland and could help.
c.^ Marshal Rydz-Śmigły made a final radio broadcast to Polish troops from Romania on September 20. He stressed the Polish army's involvement in fighting the Germans and told the commanders to avoid the pointless bloodshed of fighting the Bolsheviks.
f.^ Kochanski contradicts Czubiński, stating that the exile government did consider itself at war with the Soviet Union. Sikorski's position was that Germany was the principal enemy and that cooperation with the Soviet Union was conditionally possible. There were rival factions in the government and probably no official proclamations on that issue.
g.^ The British wanted the Polish forces moved to the Middle East because they expected a German offensive in that direction, through the Caucasus. Churchill asked Stalin to permit the Poles to leave the Soviet Union and thanked him when the agreement was secured. Sikorski was opposed to the removal of Polish soldiers from the Soviet Union, but eventually relented. Sikorski wanted Polish armies engaged against Germany in Western Europe, in the Middle East and in the Soviet Union, because of the uncertain outcomes of military campaigns and because of the need for a Polish (exile-government-affiliated) military presence fighting alongside whichever power would eventually liberate Poland. General Anders, earlier characterized in internal Soviet documents as a loyal pro-Soviet Polish officer (he was a strong supporter of the Sikorski–Mayski agreement of July 1941), had by the spring of 1942 become convinced of the inevitability of a Soviet defeat. Anders then insisted on taking the Polish formations out of the Soviet Union and opposed Sikorski. Eventually Anders became known for anti-Soviet views and he demanded the dismissal of the government led by Sikorski, his commander-in-chief. When it was decided that the Polish army would take a southern route out of the Soviet Union, it was not yet apparent that the war with Germany would be resolved mainly by a victorious Soviet westbound offensive on the Eastern Front and that other war theaters would be relegated to a more peripheral role. In particular, it was not known that Poland would be liberated by the Soviets.
j.^ After the abortive Dieppe Raid in Normandy in 1942, the Allies exercised extra caution and would not risk any more failed operations.
In general, the Americans demanded accelerated offensive action in Europe, while the British wanted to delay the landing in France, which they judged impractical for the time being, and to focus instead on the much easier-to-execute Italian Campaign.
k.^ Expecting the arrival of the Red Army, in December 1944 the Nazis at the last moment closed down the Auschwitz slave labor operation, demolished the main compound and force-marched some 60,000 prisoners toward camps in Germany. A smaller number of sick people remained on the premises until the Soviets arrived.
n.^ The Polish communists attempted to obtain modifications of the Curzon Line that would result in Poland retaining Vilnius, Lviv and the oil fields of Eastern Galicia. Similar territorial conditions were postulated by the Polish government in London in August 1944, after Prime Minister Mikołajczyk's visit to Moscow. Joseph Stalin decided to satisfy the Lithuanian demands for Vilnius and the Ukrainian demands for Lviv, and to annex to the Soviet Union Eastern Galicia, a region that had never been a part of the Russian Empire.
o.^ The Polish government in exile had to cope with a number of instances of negative media and other publicity. In one particularly damaging case, about one third of the Jewish soldiers in the Polish Army in Britain deserted, claiming antisemitism in the institution. Some of them joined a British corps and some were court-martialed, but they were eventually granted amnesty by President Raczkiewicz.
p.^ During the 1930s, the relations between the ruling Sanation camp and the various opposition groups and parties were tense, often hostile. From 1938, the growing external threat was clearly perceived by many, and there were voices (mainly from the opposition) calling for the formation of a unified Government of National Defense and for taking other steps to promote a defense-minded consolidation of society. The Sanation ruling circle was not inclined to broaden the government's base and in June 1939 ultimately rejected any power-sharing ideas, apparently because they did not believe that Germany's hostile intentions were serious. The delegations that paid visits to President Mościcki and presented petitions on the issue of coalition government and general war preparedness, representing the agrarian and socialist parties and Polish intellectuals, were not well received. The regime did appeal to citizens' patriotism and generosity, and several major fund-raising efforts, often led by opposition groups and politicians (some of whom returned from political exile at that time of danger), resulted in donations of considerable magnitude, which by and large went unused.
q.^ In late February 1945, referring to the post-Yalta Conference protests of the Polish government-in-exile, Winston Churchill said the following in the House of Commons: "Let me remind them that there would have been no Lublin Committee or Lublin Provisional Government in Poland if the Polish Government in London had accepted our faithful counsel given to them a year ago. They would have entered into Poland as its active Government, with the liberating Armies of Russia."
r.^ The right-wing anti-communist National Armed Forces (NSZ) stopped cooperating with the AK in November 1944. Being highly antisemitic, they attacked Jewish partisans in German-occupied Poland. They fought the incoming Soviet troops and the Polish security forces.
The Holy Cross Mountains Brigade of the NSZ avoided the Soviet advance and, collaborating with the German military authorities, entered Czechoslovakia in February 1945. As the war ended, it came into contact with the US 3rd Army. The British refused to agree to the Brigade's incorporation into the Polish Armed Forces in the West and the Brigade was disarmed by the US Army in August.
t.^ The size of post-war Poland was determined by Joseph Stalin alone, because the Western Allies, as shown by the record of British diplomacy, would not have objected to a much smaller Polish state being established.
u.^ The communist Provisional Government of Poland demanded the establishment of the post-war Polish–German border at the Oder–Neisse line, that is, along the Lusatian Neisse (Western Neisse) and, further north, the Oder river. Joseph Stalin indicated his support for the Polish position and the Provisional Government administered the region as soon as it was cleared of the German forces. The American and especially the British governments had a long-standing preference for the border to run further east in its southern portion, along the Nysa Kłodzka (Eastern Neisse) and the upper Oder rivers, which would have kept a large portion of Lower Silesia, including the city of Breslau, in post-war Germany. At the Potsdam Conference, the delegation of what was now the Polish Provisional Government of National Unity continued lobbying aimed at keeping all of Lower Silesia under Polish jurisdiction, rather than letting some of it become part of the Soviet occupation zone of Germany. Taking advantage of the disruption caused to the British delegation by the results of the British general election, the Americans dealt with the Soviets on their own. The outcome, stated in the conference protocols, was that, until the final peace settlement, the area all the way west to the Lusatian Neisse would be administered by Poland and would not be a part of the Soviet zone of occupation. The planned peace conference never took place and the border has remained where it was provisionally placed in 1945. It was confirmed in the treaties that Poland signed with West Germany in 1970 and with the unified Germany in 1990.
w.^ There was a total of 1,517,983 'repatriates' from the east, according to Halik Kochanski. Others give different figures. Of the several million ethnic Poles living in Kresy, a few million were repatriated to Poland as re-established within its new borders, while perhaps a million stayed in what had become Soviet territory.
y.^ Based on the numbers from the cited sources by Norman Davies and Halik Kochanski. The historian Marcin Zaremba gave an entirely different figure of over two thousand Wehrmacht soldiers killed.
z.^ The liberation of the Praga right-bank districts of Warsaw took over a month of fighting, at the cost of eight thousand soldiers killed on each side. After the area was cleared of the Germans in mid-September, General Zygmunt Berling's forces crossed the Vistula and the failed Czerniaków operation (a limited Warsaw Uprising rescue attempt) commenced.
- Norman Davies, Europe: A History, p. 978. HarperCollins, New York 1998, ISBN 0-06-097468-0 - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], Wydawnictwo Nauka i Innowacje, Poznań 2012, ISBN 978-83-63795-01-6, pp. 153-156 - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 156-159 - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 163-167 - Overy, Richard (2010).
The Times Complete History of the World (8th ed.), pp. 294-295. London: Times Books. ISBN 0007315694. - Czesław Brzoza, Andrzej Leon Sowa, Historia Polski 1918–1945 [History of Poland: 1918–1945], pp. 483–490. Kraków 2009, Wydawnictwo Literackie, ISBN 978-83-08-04125-3. - Zgórniak, Marian; Łaptos, Józef; Solarz, Jacek (2006). Wielka historia świata, tom 11, wielkie wojny XX wieku (1914–1945) [The Great History of the World, vol. 11: Great Wars of the 20th century (1914–1945)]. Kraków: Fogra. ISBN 83-60657-00-9, p. 409 - Zgórniak, Marian; Łaptos, Józef; Solarz, Jacek (2006). Wielka historia świata, tom 11, wielkie wojny XX wieku (1914–1945) [The Great History of the World, vol. 11: Great Wars of the 20th century (1914–1945)], pp. 410-412 - Norman Davies, Europe: A History, pp. 991-998. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 44-48. Cambridge, MA: Harvard University Press. ISBN 978-0-674-06814-8. - Boris Meissner, "The Baltic Question in World Politics", The Baltic States in Peace and War (The Pennsylvania State University Press, 1978), 139–148 - Norman Davies, Europe at War 1939–1945: No Simple Victory, pp. 38-40. Penguin Books, New York 2006, ISBN 978-0-14-311409-3 - Zgórniak, Marian; Łaptos, Józef; Solarz, Jacek (2006). Wielka historia świata, tom 11, wielkie wojny XX wieku (1914–1945) [The Great History of the World, vol. 11: Great Wars of the 20th century (1914–1945)], pp. 418-420 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 56-58. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 171-174 - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 180-183 - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 183-189 - Antoni Czubiński, Historia drugiej wojny światowej 1939–1945 [History of World War II 1939–1945], Dom Wydawniczy REBIS, Poznań 2009, ISBN 978-83-7177-546-8, pp. 37–38 - Czesław Brzoza, Andrzej Leon Sowa, Historia Polski 1918–1945 [History of Poland: 1918–1945], pp. 495–498. - Norman Davies, Europe: A History, pp. 1000-1013. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 59-66. - Norman Davies, No Simple Victory, pp. 229-230. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 174-177 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 69-76. - Norman Davies, No Simple Victory, p. 215. - Czesław Brzoza, Andrzej Leon Sowa, Historia Polski 1918–1945 [History of Poland: 1918–1945], pp. 499–504. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 52-56. - Norman Davies, Europe: A History, pp. 995, 1000-1001. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 177-180 - Zgórniak, Marian; Łaptos, Józef; Solarz, Jacek (2006). Wielka historia świata, tom 11, wielkie wojny XX wieku (1914–1945) [The Great History of the World, vol. 11: Great Wars of the 20th century (1914–1945)], p. 448 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 86-90. - Czesław Brzoza, Andrzej Leon Sowa, Historia Polski 1918–1945 [History of Poland: 1918–1945], pp. 504–511. - Friedrich Werner von der Schulenburg. "The German Ambassador in the Soviet Union (Schulenburg) to the German Foreign Office". The Avalon Project. Yale Law School. - Halik Kochanski (2012). 
The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 76-80. - Tadeusz Piotrowski (1997). Poland's Holocaust: Ethnic Strife, Collaboration with Occupying Forces and Genocide... McFarland & Company. pp. 88–90, 295. ISBN 0-7864-0371-3. - Мельтюхов М.И. (2000). "Упущенный шанс Сталина. Советский Союз и борьба за Европу: 1939–1941 (Dropped chance of Stalin: USSR and the struggle for Europe)". Militera.ru (in Russian). Moscow, Veche. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 189-191 - Jan Czuła, Pożytki z Jałty [The benefits of Yalta], Przegląd #13 (795), 23-29 March 2015 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 94-97. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 80-84. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 193-198 - Jerzy Lukowski; Hubert Zawadzki. A Concise History of Poland. pp. 255–256. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, p. 257. - AFP/Expatica, Polish experts lower nation's WWII death toll, expatica.com, 30 August 2009 - Polska 1939–1945. Straty osobowe i ofiary represji pod dwiema okupacjami, ed. Tomasz Szarota and Wojciech Materski, Warszawa, IPN 2009, ISBN 978-83-7629-067-6 (Introduction reproduced here) - Norman Davies, No Simple Victory, pp. 167-168. - Norman Davies, No Simple Victory, pp. 309-311. - Norman Davies, No Simple Victory, pp. 376-377. - Norman Davies, Europe: A History, pp. 1034-1035. - Norman Davies, No Simple Victory, p. 165. - Overy, Richard (2010). The Times Complete History of the World (8th ed.), pp. 298-299. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 207-209 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 99, 261. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 119-124. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 112-119. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 124-128. - Norman Davies, No Simple Victory, p. 337. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 263-268. - Norman Davies, No Simple Victory, p. 339. - Norman Davies, No Simple Victory, pp. 344-345. - Norman Davies, No Simple Victory, p. 407. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 97-103. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 268-271. - Norman Davies, No Simple Victory, pp. 323-324. - Elżbieta Trela-Mazur (1997). Włodzimierz Bonusiak; Stanisław Jan Ciesielski; Zygmunt Mańkowski; Mikołaj Iwanow, eds. Sowietyzacja oświaty w Małopolsce Wschodniej pod radziecką okupacją 1939-1941 (Sovietization of education in eastern Lesser Poland during the Soviet occupation 1939-1941) (in Polish). Kielce: Wyższa Szkoła Pedagogiczna im. Jana Kochanowskiego. p. 294. ISBN 83-7133-100-2., also in Wrocławskie Studia Wschodnie, Wrocław, 1997 - Wojciech Roszkowski (1998). Historia Polski 1914–1997 (in Polish). Warsaw: Wydawnictwa Naukowe PWN. p. 476. ISBN 83-01-12693-0. - Various authors (1998). Adam Sudoł, ed. Sowietyzacja Kresów Wschodnich II Rzeczypospolitej po 17 września 1939 (in Polish). Bydgoszcz: Wyższa Szkoła Pedagogiczna. p. 441. ISBN 83-7096-281-5. 
- various authors (2001). "Stalinist Forced Relocation Policies". In Myron Weiner; Sharon Stanton Russell. Demography and National Security. Berghahn Books. pp. 308–315. ISBN 1-57181-339-X. - Jan Tomasz Gross (2003). Revolution from Abroad. Princeton: Princeton University Press. p. 396. ISBN 0-691-09603-1. - Karolina Lanckorońska (2001). "I — Lwów". Wspomnienia wojenne; 22 IX 1939 – 5 IV 1945 (in Polish). Kraków: ZNAK. p. 364. ISBN 83-240-0077-1. - Craig Thompson-Dutton (1950). "The Police State & The Police and the Judiciary". The Police State: What You Want to Know about the Soviet Union. Dutton. pp. 88–95. - Michael Parrish (1996). The Lesser Terror: Soviet State Security, 1939-1953. Praeger Publishers. pp. 99–101. ISBN 0-275-95113-8. - Peter Rutland (1992). "Introduction". The Politics of Economic Stagnation in the Soviet Union. Cambridge: Cambridge University Press. p. 9. ISBN 0-521-39241-1. - Victor A. Kravchenko (1988). I Chose Justice. Transaction Publishers. p. 310. ISBN 0-88738-756-X. - various authors; Stanisław Ciesielski; Wojciech Materski; Andrzej Paczkowski (2002). "Represje 1939–1941". Indeks represjonowanych (in Polish) (2nd ed.). Warsaw: Ośrodek Karta. ISBN 83-88288-31-8. Retrieved 2006-03-24. - Jan Tomasz Gross (2003). Revolution from Abroad. Princeton: Princeton University Press. p. 396. ISBN 0-691-09603-1. - Jan T. Gross, op cit, p188 - Zvi Gitelman (2001). A Century of Ambivalence: The Jews of Russia and the Soviet Union, 1881 to the Present. Indiana University Press. p. 116. ISBN 0-253-21418-1. - Jan Tomasz Gross, Revolution from Abroad: The Soviet Conquest of Poland's Western Ukraine and Western Belorussia, Princeton University Press, 2002, ISBN 0-691-09603-1, p. 35 - "O Sowieckich represjach wobec Polaków" IPN Bulletin 11(34) 2003 page 4–31 - Piotrowski, Tadeusz (1988). "Ukrainian Collaborators". Poland's Holocaust: Ethnic Strife, Collaboration with Occupying Forces and Genocide in the Second Republic, 1918-1947. McFarland. pp. 177–259. ISBN 0-7864-0371-3. - Militärgeschichtliches Forschungsamt; Gottfried Schramm (1997). Bernd Wegner, ed. From Peace to War: Germany, Soviet Russia and the World, 1939–1941. Berghahn Books. pp. 47–79. ISBN 1-57181-882-0. - Kużniar-Plota, Małgorzata (30 November 2004). "Decision to commence investigation into Katyn Massacre". Departmental Commission for the Prosecution of Crimes against the Polish Nation. Retrieved 12 August 2014. - Antoni Czubiński, Historia drugiej wojny światowej 1939–1945 [History of World War II 1939–1945], p. 68 - "Decision to commence investigation into Katyn Massacre". Institute of National Remembrance website. Institute of National Remembrance. 2004. Archived from the original on May 27, 2005. Retrieved 2006-03-15. - Marek Jan Chodakiewicz (2004). Between Nazis and Soviets: Occupation Politics in Poland, 1939–1947. Lexington Books. ISBN 0-7391-0484-5. - beanbean (2008-05-02). "A Polish life. 5: Starobielsk and the trans-Siberian railway". My Telegraph. Retrieved 2012-05-08. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 376-383. - Gustaw Herling-Grudziński (1996). A World Apart: Imprisonment in a Soviet Labor Camp During World War II. Penguin Books. p. 284. ISBN 0-14-025184-7. - Władysław Anders (1995). Bez ostatniego rozdziału (in Polish). Lublin: Test. p. 540. ISBN 83-7038-168-5. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 136-139. - Halik Kochanski (2012). 
The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 153-162. - Carla Tonini, The Polish underground press and the issue of collaboration with the Nazi occupiers, 1939-1944, European Review of History: Revue Europeenne d'Histoire, Volume 15, Issue 2, April 2008, pages 193-205 - Klaus-Peter Friedrich. Collaboration in a "Land without a Quisling": Patterns of Cooperation with the Nazi German Occupation Regime in Poland during World War II. Slavic Review, Vol. 64, No. 4, (Winter, 2005), pp. 711-746. JSTOR - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 192-193 - John Connelly, Why the Poles Collaborated so Little: And Why That Is No Reason for Nationalist Hubris, Slavic Review, Vol. 64, No. 4 (Winter, 2005), pp. 771-781, JSTOR - Richard C. Lukas, Out of the Inferno: Poles Remember the Holocaust, University Press of Kentucky 1989 - 201 pages. Page 13; also in Richard C. Lukas, The Forgotten Holocaust: The Poles Under German Occupation, 1939-1944, University Press of Kentucky 1986 - 300 pages - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 275-276. - Hempel, Adam (1987). Policja granatowa w okupacyjnym systemie administracyjnym Generalnego Gubernatorstwa: 1939–1945 (in Polish). Warsaw: Instytut Wydawniczy Związków Zawodowych. p. 83. - Encyclopedia of the Holocaust entry on the Blue Police, Macmillan Publishing Company, New York NY, 1990. ISBN 0-02-864527-8. - Gunnar S. Paulsson (2004). "The Demography of Jews in Hiding in Warsaw". The Holocaust: Critical Concepts in Historical Studies. London: Routledge. ISBN 0-415-27509-1. - Hempel, Adam (1990). Pogrobowcy klęski: rzecz o policji "granatowej" w Generalnym Gubernatorstwie 1939-1945 (in Polish). Warsaw: Państwowe Wydawnictwo Naukowe. p. 456. ISBN 83-01-09291-2. - Paczkowski (op.cit., p.60) cites 10% of policemen and 20% of officers - "Policja Polska Generalnego Gubernatorstwa" (2005). Encyklopedia Internetowa PWN (in Polish). Warsaw: Państwowe Wydawnictwa Naukowe. - The Righteous Among The Nations - Polish rescuer Waclaw Nowinski - Leszczyński, Adam (7 September 2012). "Polacy wobec Holocaustu" ["Poles and the Holocaust"]. (A conversation with Timothy Snyder). wyborcza.pl. Retrieved 11 June 2014. - Marek Jan Chodakiewicz (April 2006). "Review of Sowjetische Partisanen in Weißrußland by Bogdan Musial". Sarmatian Review. Archived from the original on July 18, 2012 – via Internet Archive. - (Lithuanian) Rimantas Zizas. Armijos Krajovos veikla Lietuvoje 1942–1944 metais (Activities of Armia Krajowa in Lithuania in 1942–1944). Armija Krajova Lietuvoje, pp. 14–39. A. Bubnys, K. Garšva, E. Gečiauskas, J. Lebionka, J. Saudargienė, R. Zizas (editors). Vilnius – Kaunas, 1995. - Dieter Pohl. Hans Krueger and the Murder of the Jews in the Stanislawow Region (Galicia) (PDF). pp. 12/13, 17/18, 21 – via Yad Vashem.org. - Review by John Radzilowski of Yaffa Eliach's There Once Was a World: A 900-Year Chronicle of the Shtetl of Eishyshok, Journal of Genocide Research, vol. 1, no. 2 (June 1999), City University of New York. - Czesław Brzoza, Andrzej Leon Sowa, Historia Polski 1918–1945 [History of Poland: 1918–1945], pp. 521–535. - Norman Davies, Europe: A History, p. 1021. - Emanuel Ringelblum, Joseph Kermish, Shmuel Krakowski, Polish-Jewish relations during the Second World War.
Page 226; quote from Chapter "The Idealists": "Informing and denunciation flourish throughout the country, thanks largely to the Volksdeutsche. Arrests and round-ups at every step and constant searches..." - Paul, Mark (September 2015). "Patterns of Cooperation, Collaboration and Betrayal: Jews, Germans and Poles in Occupied Poland during World War II" (PDF). Glaukopis. Foreign language studies. 159/344 in PDF. Retrieved 25 February 2016. From testimonies of survivors; Jewish Historical Institute Archive, record group 301, number 2932. - Tomasz Strzembosz (31 March 2001), “Inny obraz sąsiadów.” Rzeczpospolita Nr 77, archived by Internet Wayback Machine. - "Manslaughter of Jewish Inhabitants of Jedwabne." Institute of National Remembrance. Warsaw, Poland. Publication date: 18 November 2003. - Zamoyski, Adam. The Polish Way, p. 360. New York: Hippocrene Books, 1994. ISBN 0-7818-0200-8 - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 198-201 - Jerzy Lukowski; Hubert Zawadzki. A Concise History of Poland. pp. 264–269. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 202-204 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 278-285. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 218-220 - Aleksandra Klich, Zbigniew Mikołejko: Jeden drugiemu wchodzi na głowę [Zbigniew Mikołejko: One steps on another one's head]. 25 June 2016. "Jeden drugiemu". A conversation with Zbigniew Mikołejko. wyborcza.pl. Retrieved 30 June 2016. - Norman Davies, No Simple Victory, p. 312. - Norman Davies, No Simple Victory, p. 417. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 285-290. - Norman Davies, No Simple Victory, pp. 317-318. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 384-386. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 213-218 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 392-402. - Norman Davies, Europe: A History, pp. 1040-1044. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 402-426. - Norman Davies, No Simple Victory, pp. 32, 117-118. - Norman Davies, No Simple Victory, pp. 119-121. - Norman Davies, No Simple Victory, p. 210. - Norman Davies, No Simple Victory, p. 316. - Norman Davies, God's Playground volume II, p. 355. Columbia University Press, New York 2005, ISBN 978-0-231-12819-3 - Norman Davies, No Simple Victory, p. 342. - Norman Davies, No Simple Victory, p. 320. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 499-515. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 520-527. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 27-32. - Jan Karski, Zagadnienie żydowskie w Polsce pod okupacjami [The Jewish Question in Poland Under the Occupations]. 15 November 2014. Zagadnienie żydowskie w Polsce pod okupacjami. wyborcza.pl. Retrieved 08 January 2015. - Weinbaum, Laurence (21 April 2015). Confronting chilling truths about Poland's wartime history. The Washington Post. Retrieved 01 December 2015. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 107-112. - Norman Davies, No Simple Victory, pp. 358-364. - Halik Kochanski (2012). 
The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 294-298. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 298-303. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 303-306. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 306-313. - Overy, Richard (2010). The Times Complete History of the World (8th ed.), pp. 300-301. - Dawid Warszawski, Pogromy w cieniu gigantów. Żydzi i ich sąsiedzi po ataku III Rzeszy na ZSRR [Pogroms in the shadow of the giants. The Jews and their neighbors after the Third Reich's attack on the Soviet Union]. 3 January 2015. Pogromy w cieniu gigantów. Żydzi i ich sąsiedzi po ataku III Rzeszy na ZSRR. wyborcza.pl. Retrieved 24 March 2015. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 291-294. - Norman Davies, No Simple Victory, pp. 327-328. - Jerzy Lukowski; Hubert Zawadzki. A Concise History of Poland. pp. 260–261. - Norman Davies, No Simple Victory, p. 374. - "The Righteous Among The Nations". yadvashem.org. January 1, 2012. Retrieved September 21, 2012. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 313-324. - Norman Davies, No Simple Victory, pp. 351-352. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 34-37. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 103-107. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 359-363. - Timothy Snyder. (2003)The Causes of Ukrainian-Polish Ethnic Cleansing 1943, The Past and Present Society: Oxford University Press. pg. 220 - Tadeusz Piotrowski, Poland's holocaust. Published by McFarland. Page 247 - Magosci, Motyka, Rossolinski - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 212-214. - Czesław Brzoza, Andrzej Leon Sowa, Historia Polski 1918–1945 [History of Poland: 1918–1945], pp. 512–521. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 214-219. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 219-221. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 231-234. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 221-224. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 204-207 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 163-170. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 170-173. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 182-187. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 190-193. - Brzoza, Czesław (2003). Polska w czasach niepodległości i II wojny światowej (1918–1945) [Poland in Times of Independence and World War II (1918–1945)], Kraków: Fogra, ISBN 978-8-385-71961-8, pp. 312–322. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 
210-213 - Jerzy Eisler, Siedmiu wspaniałych poczet pierwszych sekretarzy KC PZPR [The Magnificent Seven: First Secretaries of the KC PZPR], Wydawnictwo Czerwone i Czarne, Warszawa 2014, ISBN 978-83-7700-042-7, pp. 178–185 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 338-344. - Norman Davies, No Simple Victory, pp. 182-183. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 325-333. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 349-354. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 354-357. - Norman Davies, Europe: A History, pp. 1036-1039. - Brzoza, Czesław (2003). Polska w czasach niepodległości i II wojny światowej (1918–1945) [Poland in Times of Independence and World War II (1918–1945)], pp. 364–374. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 445-454. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 439-445. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 456-460. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 472-480. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 480-486. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 486-495. - Czesław Brzoza, Andrzej Leon Sowa, Historia Polski 1918–1945 [History of Poland: 1918–1945], pp. 535–548. - Norman Davies, No Simple Victory, pp. 115-116. - The NKVD Against the Home Army (Armia Krajowa), Warsaw Uprising 1944 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 426-433. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 515-520. - Czesław Brzoza, Andrzej Leon Sowa, Historia Polski 1918–1945 [History of Poland: 1918–1945], pp. 549–553. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 223-226 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 545-552. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 532-536. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 552-563. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 229-233 - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 220-222 - Norman Davies, No Simple Victory, pp. 191-192. - Norman Davies, No Simple Victory, p. 408. - Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 238-240 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 536-537. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 569-577. - Polski Gułag - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 527-531. - Norman Davies, Europe: A History, pp. 1050-1051. - Norman Davies, Europe: A History, p. 1060. - Norman Davies, Europe: A History, pp. 1061-1062. - Kopp, Kristin; Niżyńska, Joanna (2012). Germany, Poland and Postmemorial Relations: In Search of a Livable Past. Palgrave Macmillan. p. 9. ISBN 978-0-230-33730-5. 
- Antoni Czubiński, Historia Polski XX wieku [The History of 20th Century Poland], pp. 233-236 - Norman Davies, No Simple Victory, pp. 347-348. - Forced migration in the 20th century - Jerzy Eisler, Siedmiu wspaniałych poczet pierwszych sekretarzy KC PZPR [The Magnificent Seven: First Secretaries of the KC PZPR], pp. 61–62 - Leszczyński, Adam (19 May 2014). "Z ziemi polskiej do włoskiej" ["From the Polish to the Italian land"]. (A conversation with Zbigniew Wawer). Gazeta Wyborcza wyborcza.pl. Retrieved 08 March 2015. - Norman Davies, No Simple Victory, pp. 483-486. - Norman Davies, No Simple Victory, pp. 160-161. - Norman Davies, No Simple Victory, p. 102. - Norman Davies, No Simple Victory, pp. 171-172. - Antoni Czubiński, Historia drugiej wojny światowej 1939–1945 [History of World War II 1939–1945], p. 32 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 434-439. - Snyder, Timothy (2003). The Reconstruction of Nations: Poland, Ukraine, Lithuania, Belarus, 1569-1999. Yale University Press. pp. 88, 93. ISBN 9780300105865. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 460-463. - Czesław Brzoza, Andrzej Leon Sowa, Historia Polski 1918–1945 [History of Poland: 1918–1945], pp. 365–367. - Antoni Czubiński, Historia drugiej wojny światowej 1939–1945 [History of World War II 1939–1945], pp. 218, 226 - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 537-541. - Halik Kochanski (2012). The Eagle Unbowed: Poland and the Poles in the Second World War, pp. 541-545. - Marcin Zaremba, Biedni Polacy na żniwach [Poor Poles at the harvest] (17 January 2011). Biedni Polacy na żniwach. Gazeta Wyborcza wyborcza.pl. Retrieved 29 February 2016. - Krzysztof Wasilewski, Masakra żołnierzy Berlinga [Massacre of Berling's soldiers].Masakra. przeglad-tygodnik.pl. 29 September 2014. Retrieved 25 June 2016. - Chodakiewicz, Marek Jan. Between Nazis and Soviets: Occupation Politics in Poland, 1939-1947. Lanham: Lexington Books, 2004 ISBN 0-7391-0484-5. online review - Coutouvidis, John, and Reynolds, Jaime. Poland, 1939-1947 (1986) - Davies, Norman (1982), God's Playground. New York: Columbia University Press. ISBN 0-231-05353-3 and ISBN 0-231-05351-7. - Davies, Norman Rising '44: The Battle for Warsaw (2004) - Douglas, R.M. Orderly and Humane. The Expulsion of the Germans after the Second World War. Yale University Press, 2012. ISBN 978-0-300-16660-6. - Fritz, Stephen G. (2011). Ostkrieg: Hitler's War of Extermination in the East. University Press of Kentucky. - Gross, Jan Tomasz, Revolution from Abroad: The Soviet Conquest of Poland's Western Ukraine and Western Belorussia, Princeton University Press, 2002, ISBN 0-691-09603-1. - Gross, Jan T. Polish Society under German Occupation: The Generalgouvernement, 1939-1944 (Princeton UP, 1979) - Hiden, John. ed. The Baltic and the Outbreak of the Second World War, Cambridge University Press, 2003, ISBN 0-521-53120-9 - Kochanski, Halik. The Eagle Unbowed: Poland and the Poles in the Second World War. Harvard U.P., 2012, ISBN 0674071050, with word search by Amazon. - Koskodan, Kenneth K. No Greater Ally: The Untold Story of Poland's Forces in World War II, Osprey Publishing 2009, ISBN 978-1-84908-479-6. - Lukas, Richard C. Did the Children Cry: Hitler's War Against Jewish and Polish Children, 1939-1945 (1st ed.; N.Y.:Hippocrene, 1994). ISBN 0-7818-0242-3 - Lukas, Richard C. 
Forgotten Holocaust:The Poles under German Occupation, 1939-1944 (3rd rev. ed.; N.Y.:Hippocrene, 2012). ISBN 978-0-7818-1302-0 - Lukas, Richard C. Forgotten Survivors:Polish Christians Remember the Nazi Occupation (1st ed.; Lawrence, KS: University Press of Kansas, 2004). ISBN 0-7818-0242-3 - Sword, Keith (1991). The Soviet Takeover of the Polish Eastern Provinces, 1939-41. Palgrave Macmillan. ISBN 0-312-05570-6. - Snyder, Timothy. Bloodlands: Europe Between Hitler and Stalin (2010) - Terlecki, Olgierd. (1972), Poles in the Italian Campaign, 1943-1945, Interpress Publishers. - Steven J. Zaloga, Poland 1939: The birth of Blitzkrieg, Osprey Publishing 2002, ISBN 1-84176-408-6. - University of New York in Buffalo Info Poland: World War II - Polish Losses in World War II, Witold J. Lukaszewski, Sarmatian Review, April 1998 - Zmagania o kształt powojennej Polski w latach 1944 - 1947, Bryk.pl - Walka o kształt państwa polskiego w latach 1944 - 1947, Sciaga.pl - Polska w latach 1944-1947. Tworzenie i rozbudowa struktur władzy komunistycznej, Sciaga.pl
This Graphing Exponential Functions Print and Go Lesson introduces your students to graphing exponential functions. Students should already be comfortable evaluating exponential functions, and should be comfortable graphing by completing a table of values. Through this lesson, students will fill in tables for different exponential functions, graph them, and observe the changes to "discover" how each part of an exponential function transforms the graph. This file includes the following:
- Evaluating Exponential Problems "Warm-up" Problems
- Graphing Exponential Functions Two Day Guided Lesson to allow students to discover transformations of exponential functions (4 pg.)
- Two Graphing Exponential Functions practice assignments for homework (3 pg.)
- Transformations of Exponential Functions Review (1 pg.)
- PDF + Word Document for easy editing
- Answer Key
Please see the preview for more information and sample pages. If you have any questions about this product, please email me at email@example.com You might also be interested in: Algebra Beginning of the Year Common Core Pre-Assessment & Intervention Tool This purchase is for personal use only. © Brittany Kiser. Please note - this resource is for use by one teacher only. If other teachers at your school would like to use the materials, please purchase additional licenses. Thank you!
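For reference, the family of graphs the lesson has students explore can be summarized with one general transformed form; this is a standard textbook presentation added here for context, not wording taken from the product itself:

f(x) = a \cdot b^{\,x-h} + k, \qquad b > 0,\ b \neq 1

Here a stretches or compresses the graph vertically (and reflects it across the x-axis when a < 0), h shifts the graph horizontally, and k shifts it vertically, moving the horizontal asymptote from y = 0 to y = k.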
For about a century now, scientists have theorized that the metals in our Universe are the result of stellar nucleosynthesis. This theory states that after the first stars formed, heat and pressure in their interiors led to the creation of heavier elements like silicon and iron. These elements not only enriched future generations of stars ("metallicity"), but also provided the material from which the planets formed. More recent work has suggested that some of the heaviest elements could actually be the result of binary stars merging. In fact, a recent study by two astrophysicists found that a collision which took place between two neutron stars billions of years ago produced a considerable amount of some of Earth's heaviest elements, including gold and platinum. The research was conducted by Prof. Szabolcs Márka from Columbia University and Prof. Imre Bartos of the University of Florida. Their findings were published in a study titled "Nearby Neutron-Star Mergers Explain Actinide Abundance in the Early Solar System", which recently appeared in the May issue of the scientific journal Nature. According to the scientific consensus, asteroids and comets are composed of material left over from the formation of the Solar System. When bits of these come to Earth in the form of meteorites, they carry traces of radioactive isotopes whose decay is used to determine when the asteroids were created. The study of these space rocks can also shed light on what materials existed in our Solar System billions of years ago. For their study, Bartos and Márka ran numerical simulations of the Milky Way and compared the results to the composition of meteorites retrieved on Earth. What they found was that a single neutron-star collision could have occurred within our cosmic neighborhood – roughly 1,000 light years from our Solar System – about 4.65 billion years ago. At the time, our Solar System was still a massive cloud of dust and gas that would soon undergo gravitational collapse at its center, thus giving birth to our Sun. Roughly 100 million years later, the Earth and the other planets of the Solar System would form from the protoplanetary debris disk that fell into orbit around our young Sun. This single cosmic event, they estimate, gave birth to elements that would become part of this disk – and which now make up roughly 0.3% of the Earth's heaviest elements. Most of these are in the form of iodine, an element which is essential to biological processes. In this respect, the event may have played a role in the emergence of life here in the Solar System as well. To put this event in perspective, consider that the Milky Way galaxy is an estimated 100,000 light years in diameter. This collision and the resulting explosion therefore took place at roughly 1/100th of that distance. In fact, the research team indicated that if a similar event happened at the same distance today, the resulting radiation would outshine every star in the sky. What is especially interesting about this study is the way it provides insight into an event that was both unique and highly consequential in the history and formation of Earth and our Solar System. "It sheds bright light on the processes involved in the origin and composition of our Solar System, and will initiate a new type of quest within disciplines, such as chemistry, biology and geology, to solve the cosmic puzzle," Bartos summarized.
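As a rough illustration of how isotope decay is used to date such material (a generic sketch of radiometric dating, not the specific analysis performed in the study), the age follows from the exponential decay law:

N(t) = N_0 \, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}} \quad\Longrightarrow\quad t = \frac{t_{1/2}}{\ln 2}\,\ln\!\frac{N_0}{N(t)}

For example, an isotope with a half-life of about 15.7 million years (the value usually quoted for iodine-129, used here purely as an illustration) that has decayed to one-eighth of its initial abundance implies roughly three half-lives, or about 47 million years, since the material formed.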
And as Márka indicated, it also addresses some of the deeper questions scientists have about the origins of life as we know it: “Our results address a fundamental quest of humanity: Where did we come from and where are we going? It is very difficult to describe the tremendous emotions we felt when we realized what we had found and what it means for the future as we search for an explanation of our place in the universe.” It also reaffirms what Carl Sagan famously said: “We are a way for the universe to know itself. Some part of our being knows this is where we came from. We long to return. And we can, because the cosmos is also within us… The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, the carbon in our apple pies were made in the interiors of collapsing stars. We are made of starstuff.” Further Reading: Columbia News
Chapter 12 Transformations in Asia, 220-1350 A.D.
Reunited: the Sui, Tang, and Song Dynasties
After the fall of the Han dynasty, China was divided for more than three centuries. Finally a new dynasty, the Sui, united the country once again. Although this dynasty lasted only a short time, it paved the way for two longer-lived dynasties, the Tang and Song. Under these rulers, China enjoyed another golden age of prosperity and cultural brilliance.
Division to Reunification
After the collapse of Han rule, China fragmented into various warring kingdoms. Nomadic tribes from Central Asia dominated the north, while southern China came under the control of a series of weak Chinese kingdoms centered on the Yangtze.
The Six Dynasties period. This era of disunity, from 220 to 589, is known as the Six Dynasties period, named after the six kingdoms of southern China. These southern dynasties maintained many Han traditions, but failed to establish firm political control. Wealthy landlords and generals vied for power and undermined the authority of the central government. The upheaval of the period was most notable in northern China. Nomadic warriors raided and pillaged the heart of Chinese civilization. The old Han capital, Chang'an, was sacked and left in ruins. As one writer noted in the early 300s, "At this time in the city . . . there were not more than one hundred families. Weeds and thorns grew thickly as if in a forest. Only four carts could be found in the city." Eventually, however, the nomadic tribes settled down, established kingdoms, and adopted the Chinese way of life. They also encouraged the growth of Buddhism, which first entered China from India during Han times. By promoting Buddhism, the new rulers hoped to undercut the power of China's Confucian aristocracy. Buddhism's promise of spiritual salvation also gave comfort to many Chinese during a time of chaos and instability. As a result, Buddhism soon spread throughout China. Although the southern dynasties claimed to be the true rulers of China, it was a northern leader, Wendi, who reunited the country and forged a new empire. By 589 he had conquered the south and formed the Sui dynasty. An able ruler, Wendi set out to build a powerful, centralized state. He created a new legal code, reformed the bureaucracy, and strengthened the northern border against invasion. He also established a system of "ever-ready granaries"—state-owned deposits of grain that could be dispensed in times of famine or to help stabilize prices. Wendi was succeeded by his son Yangdi, who at first continued the policies of his father. Yangdi's greatest accomplishment was the building of the Grand Canal, a 1,000-mile-long waterway linking northern and southern China. However, this and other grandiose schemes, including a failed effort to conquer Korea, strained the state's resources. As discontent grew, rebellions broke out. In 618 Yangdi was overthrown and the Sui dynasty came to an end. Though short-lived, the Sui dynasty had re-established the principle of strong imperial rule in China. On this foundation its successors, the Tang, built an empire that lasted for three centuries. The greatest ruler of the Tang period was Tai Zong, son of the dynasty's founder. Tai Zong was efficient, wise, and compassionate. According to one story, the sight of locusts devouring crops drove him to cry out, "Miserable creatures, must you eat the grain? If you are hungry, come feed upon my heart."
Expansion and foreign relations. Under Tai Zong and his successors, China expanded rapidly.
Chinese armies defeated Turkish nomads in Central Asia and extended China's frontiers far to the west, making contact with India and the Islamic world. China also conquered portions of Korea, Manchuria, and Vietnam, and Chinese culture began to have a strong influence on Japan. Contact with other lands and peoples also affected China. During the 700s and 800s the Tang capital of Chang'an became a center of world culture. The largest city in the world at the time, with some 2 million people, Chang'an was home to Persians, Arabs, Jews, Greeks, and many other non-Chinese residents. The influence of these diverse peoples helped revitalize Chinese culture. Under Tang rule, China became the most sophisticated, powerful, and wealthy country in the world. A major reason for Tang success was the revival of the Han civil service system, staffed by a scholar-gentry class chosen on the basis of competitive examinations. This class of officials helped to hold the government together and to keep it working smoothly from one ruler to another. As in Han times, the Tang civil service exam was based on Confucian learning. The test was difficult and few candidates managed to pass. Although it was open to all Chinese, usually only the wealthy could afford the education necessary to pass the test. Over time, however, the civil service system created the chance for advancement in Chinese society.
Buddhism under the Tang. The resurgence of the scholar-gentry brought a revival of Confucian thought, but Buddhism remained the state religion in China for the first two centuries of Tang rule. Many different sects developed, the most famous of which is known by its Japanese name, Zen. Zen Buddhism stressed meditation as a means to enlightenment and showed a marked similarity to Daoism. One Chinese ruler who gave great support to Buddhism was the Empress Wu, who ruled China from 654 to 705. The only woman ever to hold the Chinese throne in her own right, Wu first ruled through her husband, the third Tang emperor, and eventually took power directly. A tough, authoritarian ruler, Wu was also highly capable and held the empire together. A devout Buddhist, she believed herself to be an incarnation of the Buddha and had temples built in her name. Under Empress Wu, Buddhist monasteries amassed great wealth. Eventually, however, Buddhism lost official favor. The growing power of the monasteries came to be regarded as a threat to the state. One imperial edict claimed: "The hearts of our people have been seduced by it. . . . The monasteries and temples are . . . beautifully decorated, daring to rival palaces." In the mid-800s, officials launched a major crackdown, destroying temples and forcing monks to abandon their faith. Buddhism never regained its former influence in Chinese life.
Fall of the Tang dynasty. The Tang dynasty reached its height around 750 and then gradually declined under weak emperors. By 900, tax revenues had diminished, nomadic peoples had invaded, and bloody revolts had weakened the empire. In 907 a powerful warlord killed the emperor and seized the throne, putting an end to the Tang dynasty. A half century of political upheaval followed. Finally, in 960, a leading general, Zhao Kuangyin, seized power and declared a new dynasty, the Song. One of the new emperor's first acts was to curb the power of the military by forcefully retiring key officers. At the same time, he reinforced the position of the scholar-gentry in government.
These measures helped ensure that the military would not threaten the new dynasty. With a weakened military, however, the Song were unable to regain control over the vast empire created by the Tang. Soon invaders from the north threatened the capital at Kaifeng. To ward off attacks, the Song agreed to pay tribute in silver and trade goods to the nomadic raiders. Over time, this tribute became a crushing burden that caused resentment, and eventually rebellion, within China. One rebel leader complained: "The government exacts from us everything it can and presents it to the . . . barbarians. Our enemies have become richer each day, and not showing gratitude, they have become more aggressive and more insulting instead. Why does not the government stop paying them annual tribute, in view of the fact that it is constantly insulted? . . . Though we work hard all year round, we have never had a full stomach, and our wives and children suffer constantly from cold and hunger." In 1126 the Jurchens, a people from Manchuria, invaded and captured the Song capital. The Song court fled to southern China, where it established a new capital at Hangchow. Though reduced in size, the southern Song continued to flourish for another 150 years. While the Song continued to rule in south China, in the north the nomadic invaders did little to disrupt the normal patterns of Chinese life. Earlier, the Khitans, a nomadic people from Mongolia, had dominated the north. Continuing to use the local Chinese bureaucracy, Khitan rulers collected taxes from the Chinese peasants. They used these taxes to buy the allegiance of other nomadic groups, giving them "gifts" and other forms of patronage. The Khitan established their capital at Beijing, which was far enough north to give them easy access to the steppes, while also being connected by canals with the Yellow River and the rest of China. In 1123, the Jurchens overthrew the Khitans and established their own Chin Empire. Where the Khitans had preferred to maintain their strong ties with the nomadic life, the Chin embraced Chinese civilization more wholeheartedly. They also swept much further south than their predecessors. Eventually, the Chin Empire stretched from Manchuria to the Yangtze River.
The move of the Song dynasty to southern China highlighted a shift in the balance of Chinese civilization that had been occurring for some time. The creation of the Grand Canal, for example, had signaled the growing importance of the south in national development. By the 600s, increasing numbers of Chinese were migrating south to settle on the rich, rice-growing lands of the Yangtze Valley. By the 1000s the south, which was relatively protected from the raids of the northern steppe nomads, had surpassed the north in population and economic power. The Tang and Song dynasties also presided over a period of great economic activity in China. Tang expansion into Central Asia increased trade on the Silk Route, while China's extensive canal and river system promoted the growth of a large internal market. China also became a major overseas trading nation. At first, Muslim ships plying China's coastal waters handled the bulk of maritime trade. By the late Song period, however, China had become a major maritime power in its own right. Growing commerce led to the beginnings of a money and banking system. The first currency, copper coins arranged in strings, eventually gave way to paper money, as transactions became larger and more complex. By the late Song period, commerce and tax revenues brought the government a huge income.
City life.
As trade expanded, regional trading centers became thriving cities, bustling with activity. City streets were filled with traffic and lined with shops selling everything from noodles and candles to silk and pearls. Amusement quarters featured puppet shows, plays, and performances by dancers and acrobats. Hangchow became a sophisticated metropolis of nearly a million people. There, officials and wealthy merchants lived in luxurious homes surrounded by gardens. The Italian explorer Marco Polo, who visited Hangchow in the 1200s, noted that the houses of the wealthy “are well built and elaborately furnished; and the delight they take in decoration . . . leads them to spend . . . sums of money that would astonish you.” In contrast, ordinary residents lived in crowded apartments, while many of the poor were homeless. The state set up hospitals and orphanages to help the poor, but poverty remained a serious urban problem.

Despite the growing urbanization of China, most Chinese still lived in the countryside. To promote rural progress, the Tang rulers tried to break up large estates and give every farmer a piece of land. The Song, however, abandoned this policy, and land became increasingly concentrated in the hands of large landlords. Most peasants became tenant farmers who worked under slave-like conditions. Nevertheless, some positive changes took place in the countryside. Technological improvements in agriculture—including new irrigation techniques and new strains of rice from Southeast Asia—raised farm productivity and allowed farmers to produce a surplus. Also from Southeast Asia came a new crop—tea—that soon became a popular drink throughout China.

Agriculture was just one area in which technological change was evident during the Tang and Song years. Spurred on by social and economic trends, Chinese inventors made China the most technologically advanced civilization in the world. One of the most significant Chinese inventions was gunpowder. First developed by the Tang for use in firecrackers, gunpowder was being used as an explosive by 1100. Printing was an even greater invention of the Chinese. Using carved blocks, the Chinese produced the world's first printed book in 868, a Buddhist text called the Diamond Sutra. Other inventions included the nautical compass, the abacus, the suspension bridge, and an early clock. Chinese inventions and products were so advanced that the word for “Chinese” became a synonym for “superior” in many Asian languages.

The period of Tang and Song rule also produced a flowering of art in China. The Chinese excelled at sculpture, textile weaving, and jade carving. The fine Chinese pottery known as porcelain became famous around the world. The genius of Chinese artists was most evident, however, in painting. Inspired by the Daoist and Buddhist love of nature and the Confucian ideal of self-improvement, painters developed landscape painting to an unparalleled degree. They depicted scenes of natural beauty—rushing rivers, jagged mountains, bamboo groves—with practiced and delicate brushwork. As one master artist explained: “The wind rises from the green forest, and the foaming water rushes in the stream. Alas! Such painting cannot be achieved by physical movements of the fingers and hand, but only by the spirit entering into them. This is the nature of . . .” Like painting, the literature of this era was also highly developed and reflected the Daoist and Confucian roots of Chinese culture. Perhaps the greatest literature of the time was produced by Tang poets.
Chinese literary collections contain nearly 50,000 poems written by more than 2,000 poets of this period. Two of the most famous Tang poets reflected the divergent tendencies in Chinese thought. Li Bai (LEE BY), a Daoist, spent much of his life seeking pleasure. His writings—happy, light, and elegant—described the delights of life. Du Fu (DOO FOO), on the other hand, possessed a serious, even solemn nature and devoutly followed Confucian teachings. His carefully written verses showed his deep concern for the suffering and tragedy of human life.
We check the cost or price of an item or product before purchasing it. Cost is the value expended to produce that product or item, and it is one of the major factors in selecting one item over another of a similar kind. There are many kinds of costs, distinguished by how they are calculated. Opportunity cost and marginal cost are two such concepts.

Opportunity vs Marginal Cost

The difference between opportunity cost and marginal cost lies in the concept used to calculate them. Opportunity cost is an economic concept that captures the relationship between scarcity and choice: the value of the option given up. Marginal cost is the economic concept that expresses the cost of producing one additional item.

Opportunity cost is the value a person might have received from the option not chosen. It can also be defined as the maximum benefit a person has forgone by accepting one alternative instead of another. Opportunity costs are not actual outlays; they are notional costs representing what might have been, and they are usually overlooked.

Marginal cost is the additional cost required to produce one more unit. It is always a monetary value and is a basic concept of finance and economics. Marginal cost is the additional expense required to manufacture an extra product or service; the total cost from which it is calculated includes both fixed and variable costs.

|Parameters of Comparison|Opportunity Cost|Marginal Cost|
|---|---|---|
|Definition|Opportunity cost is the value of the benefit given up when one item is selected instead of another.|Marginal cost is the cost of producing an extra item.|
|Monetary value|Opportunity cost may or may not be a monetary value.|Marginal cost is always a monetary value.|
|Visibility|Opportunity costs are not directly visible.|Marginal costs are transparent and clearly visible.|
|Included in|Opportunity cost enters into the choices made by consumers.|Marginal cost enters into the cost of production.|
|Others|Opportunity cost covers benefits such as money, time, etc., given up in selecting one item instead of another.|Marginal cost does not cover such benefits; it concerns only the production of the extra unit.|

What is Opportunity Cost? Opportunity cost is the value of the benefits or price forgone in choosing one item or service over another. It includes not only money but also the value of time and other benefits. Opportunity cost is simply the difference between choosing one item and choosing the other. It is not directly visible; it is found by comparing the alternatives.

For instance, Jayanth works in a bakery as a chef and earns 50,000 per month. He believes he could earn more by setting up a bakery of his own. After setting up his bakery, Jayanth earns only 25,000 in the first month. The 50,000 salary he gave up is his opportunity cost, so in this month he is 25,000 worse off than if he had stayed a chef. The next month Jayanth earns 1 lakh from his bakery; his opportunity cost is still the forgone salary of 50,000, so the decision leaves him 50,000 better off in the second month.

Opportunity cost is the benefit lost in choosing one item or service over another. It does not affect the cost of production, and it does not depend on any other costs or on the total cost of producing the goods or services.
It is simply the difference between the benefits of the chosen item and those of the item given up.

What is Marginal Cost? Marginal cost is the extra value required to produce an extra unit, service, or item. Total production cost is made up of two kinds of costs: static (fixed) costs and non-static (variable) costs. Static costs do not change with the level of production, while non-static costs do. Hence, marginal cost depends on the non-static costs.

For example, consider a swimming pool at a resort. The cost of filling the pool with water is the same whether 5 or 10 guests use it, since the pool must be filled completely in either case; the cost of pumping the water is therefore a static cost. The chlorine needed for cleaning, however, depends on the season and on the number of swimmers, so the cost of chlorine is a non-static cost. The marginal cost of serving an additional guest therefore depends on the chlorine cost, which falls under the non-static costs.

Marginal cost is usually associated with the production of additional units or services of some kind. It reflects the change in the total cost of production caused by that extra output, and it exists only where non-static (variable) costs are present. Marginal cost can be defined as the ratio of the change in the total cost of production to the change in the quantity produced (a short numeric sketch follows the list below).

Main Differences Between Opportunity and Marginal Cost
- Opportunity cost is the value of the benefits gained or lost by choosing one item over another, while marginal cost is the cost of producing an extra item or service.
- Opportunity cost is independent of the total cost of production. Marginal cost, by contrast, depends on the variable portion of the total cost of production.
- Opportunity cost does not depend on production parameters such as labour, time, or output. Marginal cost does depend on such parameters, for example worker wages.
- Opportunity cost may or may not be a monetary value, while marginal cost is always a monetary value.
- Opportunity cost is the difference in value or benefit between two or more alternatives, while marginal cost is the amount required to produce one more item.
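To make the two definitions concrete, here is a minimal Python sketch. It is not from the original article: the helper function names and the production-cost figures in the final example are illustrative assumptions, while the bakery numbers reuse the example above and the marginal-cost line applies the change-in-total-cost over change-in-quantity ratio just described.

```python
# Minimal sketch, assuming the article's bakery numbers; the helper names and
# the production-cost figures in the last example are illustrative only.

def opportunity_cost(best_alternative_value):
    """Opportunity cost = value of the best alternative given up."""
    return best_alternative_value

def marginal_cost(total_cost_before, total_cost_after, qty_before, qty_after):
    """Marginal cost = change in total cost / change in quantity."""
    return (total_cost_after - total_cost_before) / (qty_after - qty_before)

# Jayanth gives up a 50,000-per-month chef's salary to run his own bakery.
forgone_salary = 50_000
month1, month2 = 25_000, 100_000
oc = opportunity_cost(forgone_salary)
print("Opportunity cost each month:", oc)   # 50000
print("Month 1 net effect:", month1 - oc)   # -25000 (worse off)
print("Month 2 net effect:", month2 - oc)   # 50000 (better off)

# Marginal cost: suppose 100 units cost 10,000 in total and 101 units cost 10,150.
print("Marginal cost of the 101st unit:", marginal_cost(10_000, 10_150, 100, 101))  # 150.0
```

The contrast the sketch highlights is that opportunity cost compares two different choices, while marginal cost compares two output levels of the same activity.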
Here we will discuss decimal to binary conversion with examples. As we know, the decimal number system has base 10 and the binary number system has base 2. While converting a decimal number to a binary number, the base changes from 10 (decimal) to 2 (binary). Every decimal number has an equivalent binary number. For example, if we want to convert (294)₁₀, we divide the number by 2 and note the quotient and the remainder; the quotient is then divided by 2 again, and so on, until the quotient reaches zero. Once the quotient is zero, the remainders written in order form the binary number, with the first remainder as the Least Significant Bit (LSB) at the top and the last remainder as the Most Significant Bit (MSB) at the bottom; the answer is read from MSB to LSB. Reading the remainders of 294 in this way gives (294)₁₀ = (100100110)₂. Binary numbers are most commonly used by computers for coding and programming because the binary number system uses only the two digits 0 and 1, and computers understand only the language of the binary number system.

To convert decimal to binary numbers, the following steps should be followed (a short code sketch after the arithmetic examples below walks through the same method):
1. Take any decimal number and divide it by 2. After dividing, you will get a quotient along with a remainder.
2. If the decimal number is even, the division is exact and the remainder is 0.
3. If the decimal number is odd, the division is not exact and the remainder is 1.
4. Continue dividing the quotient by 2 until the quotient reaches 0.
5. Now write the remainders as a series, with the Least Significant Bit (LSB) at the top and the Most Significant Bit (MSB) at the bottom, and read the binary number from bottom to top.

On the basis of the above steps, let us work through an example: dividing 244 by 2 repeatedly gives the remainders 0, 0, 1, 0, 1, 1, 1, 1, and reading them from last to first gives the binary number. Hence, 244₁₀ = 11110100₂.

Some solved examples of decimal to binary conversion:
1. How to convert 145 into the binary number system? Hence, 145₁₀ = 10010001₂
2. How to convert 112 into the binary number system? Hence, 112₁₀ = 1110000₂

Here are some practice questions on decimal to binary conversion. Solving such questions repeatedly will help students work quickly and accurately and score good marks in their examinations.
1. Convert 112₁₀ to the binary number system.
2. Convert 25673₁₀ to its equivalent binary number.
3. What would be the binary equivalent of 12999₁₀?
4. Convert 555₁₀ to a binary number.

The decimal number system is also known as the base-ten or denary numeral system. The Chinese counting rod system and the Hindu-Arabic numeral system are the only two positional decimal systems known from ancient civilization. Most computer storage media, such as compact discs and DVDs, use the binary number system to represent large files. A single binary digit is known as a bit, and a set of 8 bits is known as a byte. Sometimes the word 'period' is used in place of 'decimal point' to refer to the dot that separates the positions of a number in the decimal number system.

1. How many unique symbols are there in the binary number system?
2. What is the greatest 4-digit number that can be made from decimal digits?
3. What is the greatest 4-digit number that can be made from binary digits?
4. Convert 100₁₀ to the binary number system.

1. Define the binary number system and list some of its applications. The binary number system, also known as the base-2 system, is a way of representing numbers using a combination of only two digits, i.e. 0 and 1.
A single binary digit is known as a “bit.” Binary arithmetic operations such as addition, subtraction, multiplication, and division are performed in much the same way as arithmetic operations on decimal numerals.

Applications of the Binary Number System: Binary numbers are used in many everyday applications. Every program written in a computer language such as Java or C++, and any digital data that is encoded, is ultimately translated into binary, because the computer understands only the two digits 0 and 1. Computers also use these two digits to represent data and information as bits, since they understand only this coded language.

2. Explain binary number system arithmetic operations. Binary arithmetic operations are performed much as they are with decimal numerals. Here are some of the binary arithmetic operations:

Addition of two binary numbers gives a binary number. For example, if we add the binary numbers 1101₂ and 1001₂, we get 10110₂ (13 + 9 = 22), which is a binary number.

Subtraction of two binary numbers gives a binary number. For example, if we subtract 1010₂ from 1101₂, we get 0011₂ (13 − 10 = 3), which is a binary number.

Binary multiplication is performed in the same way as multiplication of decimal numerals. For example, if we multiply the binary numbers 1101₂ and 1010₂, we get 10000010₂ (13 × 10 = 130), which is a binary number.

Binary division is performed in the same way as division of decimal numerals. For example, dividing 1010₂ by 2 gives the quotient 101₂ (10 ÷ 2 = 5), which is a binary number.
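The repeated-division method described above translates directly into a short program. The sketch below is not part of the original lesson; the function name to_binary is an assumption made for this example. It reproduces the worked answers above (11110100 for 244, 10010001 for 145, 1110000 for 112) and checks the arithmetic examples using Python's base-2 literals.

```python
# Minimal sketch of the repeated-division method described above; the function
# name `to_binary` is an assumption made for this example only.

def to_binary(n):
    """Convert a non-negative decimal integer to its binary string."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:                    # divide by 2 until the quotient reaches zero
        remainders.append(n % 2)    # remainder is 0 for even, 1 for odd
        n //= 2
    # the first remainder is the LSB and the last is the MSB, so reverse them
    return "".join(str(bit) for bit in reversed(remainders))

print(to_binary(244))   # 11110100
print(to_binary(145))   # 10010001
print(to_binary(112))   # 1110000

# The arithmetic examples can be checked with Python's base-2 literals:
print(bin(0b1101 + 0b1001))   # 0b10110
print(bin(0b1101 - 0b1010))   # 0b11  (i.e. 0011 in binary)
print(bin(0b1101 * 0b1010))   # 0b10000010
print(bin(0b1010 // 2))       # 0b101
```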
When the sailing ship became a viable means of long-distance transport by about 1450, shipbuilding assumed real economic and strategic importance. By developing technical innovations, shipbuilders enhanced the efficiency of water transport, and thus supported the growth of world trade. Moreover, they came to occupy a central place within commercial webs that fostered trading; builders forged links with input suppliers, merchants, ship owners, and insurance providers. Governments came to see shipbuilding as a strategic industry, not only because trade and overseas possessions had to be protected by navies, but also because an efficient merchant fleet enabled nations to import vital materials and pay their way in the world by exporting goods.

Indeed, a ship is essentially a self-propelled container. The builder's task is to construct a ship that represents a suitable compromise between speed, seaworthiness, and carrying capacity. For example, a sleek hull will increase speed, but it will afford less stability and cargo space. The builder must also take into account the depth of the harbors served and the types of goods the ship will convey. Thus, the shipbuilder balances engineering principles with variables affecting economic performance.

EUROPEAN SAILING SHIP DESIGN A revolution in European ship design occurred after 1450 (hitherto, Chinese ships were larger and technically superior) as shipbuilders moved from constructing simple ships to three-masted types with hulls of up to 300 feet in length. Portugal produced the caravel, a lateen-rigged ship with a triangular sail, used on voyages of discovery. Square-rigged types built at this time included the carrack, an early version of the Spanish galleon. Dutch builders developed the efficient fluit. All of these vessels had blunt bows and broad beams, which made them stable and slow but afforded large carrying capacity.

Shipbuilding was a labor-intensive assembly operation carried out on a seasonal basis. Different types of wood were used for specific parts of the ship. Oak was used in areas where strength was vital, and softwoods were used for decks and masts. Watertightness was achieved by caulking, that is, pounding fabric soaked in pitch into spaces between planks. Sails were made from linen and, later, canvas, and ropes were woven from hemp. Iron was used only for components such as anchors.

During the 1500s methods of ship design changed under England's Tudor monarchs, who adopted an expansive maritime policy. Master shipwrights who used plans based on empirical principles replaced the carpenter of earlier times, who built ships “by eye.” These men codified vital shipbuilding knowledge; for example, a later English shipwright, Sir Anthony Deane (1638–1721), wrote a classic study titled The Doctrine of Naval Architecture (1690). In 1741 France founded the School of Naval Construction, which provided a high standard of education.

During the seventeenth century France, Britain, Holland, Spain, and Baltic ports were major shipbuilding centers. The government naval dockyards founded by the English and French monarchs became large facilities employing hundreds of men. Merchant ship owners established yards in the many English and French ports. Bristol and London constructed what were, until the 1850s, the world's biggest ships—the 1,200- to 1,400-ton East Indiamen. In the eighteenth century Britain's North American colonies produced large quantities of tonnage.
The expanding coastal trades called for more maneuverable ships, and New England developed schooners and other specialized types such as whaling ships. After the American Revolution, the United States modified the French lugger to create the fast-sailing packet that metamorphosed into the clipper ship. Three Americans, William Webb (1816–1899), John Griffith (1809–1882), and Donald McKay (1818–1880), were famous clipper builders. In the early nineteenth century Britain, its North American colonies, and the United States were the chief shipbuilding areas. Britain focused on large, high-quality vessels made from hardwoods, whereas yards in the northeast United States and British North America constructed less durable ships of softwoods. While Britain pursued protectionist trade policies, its colonial shipbuilders enjoyed important competitive advantages. In Nova Scotia and New Brunswick shipbuilding became a specialized occupation, rather than one conducted by merchant ship owners who had diversified businesses. Independent colonial builders formed networks with local suppliers and imported manufactured inputs from Britain. Colonial yards included some specialized facilities, including sail lofts, saw pits, forges, and joiner's shops, but they carried on a "protoindustrial" activity that involved little mechanization in comparison with the yards that soon produced steam vessels made from iron and steel. RISE OF STEAM SHIPS By 1890 the steel ship powered by a triple-cylinder engine triumphed over the sailing ship as the most efficient ocean carrier. The United Kingdom emerged as the world's foremost shipbuilder as the result of a conjunction of favorable supply and demand conditions. On the supply side, its advantages stemmed from its lead in coal, engineering, and metal production, which provided cheap inputs and the means for shipbuilding to industrialize. Production was still labor intensive (and wages were low), but machinery was used extensively to increase efficiency. Steam-powered equipment bent plates, punched holes, and sheared metal, and ever-larger cranes lifted heavy components. Highly sophisticated machinery was used in engine works that were usually included within shipbuilding yards. Economies of specialization arose from the rise of dedicated component makers within the main shipbuilding regions. All of these developments enabled U.K. builders to become the most efficient in the world. On the demand side, tariff repeal, an expanding empire, and industrialization called for vast amounts of new tonnage. Britain's emergence as the center of global trade and finance fostered the growth of extensive networks that provided information and capital to its maritime industries. Between 1870 and 1914 the value of the United Kingdom's trade grew by 150 percent and its fleet doubled to 11.7 million grt (gross register tons). This rapidly growing market enabled companies and entire regions to specialize and generate further efficiency. Firms on the Clyde and in Northern Ireland focused on passenger liners and warships, whereas yards in the northeast concentrated on tramps and cargo liners. In 1913 U.K. yards produced 58 percent of world output. Germany ranked next, accounting for 14 percent; it built very large vessels and developed the revolutionary diesel engine in 1914. The United States lost the comparative advantage it enjoyed during the heyday of sail, and henceforth its marine industries were reliant on government aid. In 1913 U.S. 
yards constructed just over 8 percent of world production. Holland, France, Japan, Norway, and Italy were minor producers. Strategic and economic concerns impelled most of these countries to subsidize their marine industries to maintain them in the face of the United Kingdom's formidable comparative and competitive advantages.

The interwar years were a troubled time for global shipbuilding. Trade shrank, and a vast amount of tonnage built during World War I overhung the market for years. The United Kingdom remained the world's biggest producer, but Norway, Holland, and especially Japan made important gains. This period saw the wider application of diesel propulsion, the spread of welding, and the beginnings of prefabrication. During World War II an American shipbuilder, Henry Kaiser (1882–1967), demonstrated the efficiency that could be won by standardization and mass-production techniques.

After 1945, international trade increased at an unprecedented rate, causing the world's fleet to double in size to over 160 million grt by 1965. Such growth supported the introduction of new specialized ships, including car carriers, container vessels, and bulk carriers. The size of ships increased dramatically, beginning with tankers during the Suez Crisis of 1956.

A major shift in the location of shipbuilding unfolded after 1945. U.K. output recovered to 57 percent of world production in 1947, then fell to 8.2 percent in 1967, and virtually collapsed in the 1980s. The reasons for this phenomenon have been debated; labor conflict, spiraling costs, underinvestment in new technology, and the erosion of supporting commercial networks all played some part. After being a major source of tonnage during the war, the commercial shipbuilding industry of the United States also declined, although warship construction remained strong. Germany, Spain, and Norway gained market shares but remained small producers. With government assistance, yards in the Soviet bloc launched large quantities of tonnage. Sweden became the world's second-largest producer in the 1970s. However, it was the Japanese industry that made the most breathtaking progress, surpassing Britain in 1956 and accounting for 47.5 percent of world output in 1967. A rapidly expanding national fleet, highly productive low-cost labor, improved construction methods, and state policy supported this growth. Japan's expanding conglomerates (keiretsu) offered financial and commercial support to shipbuilders.

In 2001 Japan was still the world's largest shipbuilder, with a market share of 33 percent. Korea ranked number two with 30 percent, followed by Europe at 13 percent, and China with 10 percent. One year later, Korea displaced Japan by securing a 45 percent share, and China's two main state-controlled yards made gains. Korean shipbuilding has benefited from having close connections with the Daewoo, Samsung, and Hyundai conglomerates, and from massive state support. Such government aid has attracted complaints to the World Trade Organization from European producers. China, Korea, and Japan produce relatively unsophisticated ships, including bulk carriers and tankers, although all are moving into higher-value sectors; Japan is now building cruise liners, and Korea has secured a large percentage of recent liquefied natural gas (LNG) carrier orders. These developments threaten European yards, which focus on the most advanced types, including ferries, cruise ships, drilling rigs, specialized tankers, and container carriers.
As this occurs, the pressure on French and German firms to merge and rationalize within the European Union framework will intensify. Norway's Aker Group is the largest and most stable European producer; the Swedish industry collapsed in the 1980s, and firms in Bulgaria and Poland have filed for bankruptcy. These trends suggest that future production will be even more highly concentrated in Asia, especially as China increases its trade. Tensions will increase between countries that follow market-based policies and those where state involvement is extensive, confirming the continued economic and strategic importance of the shipbuilding industry.

SEE ALSO Canada; Cargoes, Freight; Cargoes, Passenger; Containerization; Germany; Gujarat; Hanseatic League (Hansa or Hanse); Hong Kong; India; Indian Ocean; Japan; Japanese Ministry of International Trade and Industry (METI); Korea; Mediterranean; Mitsubishi; Petroleum; Shipping, Aids to; Shipping, Coastal; Shipping, Inland Waterways, Europe; Shipping, Inland Waterways, North America; Shipping Lanes; Shipping, Merchant; Shipping, Technological Change; Ships and Shipping; Ship Types; South China Sea; Russia; Sweden; Taiwan; Tung Chee-Hwa; United Kingdom; United States.

Boyce, Gordon H. Information, Mediation, and Institutional Development: The Rise of Large-scale Enterprise in British Shipping, 1870–1919. Manchester, U.K.: Manchester University Press, 1995.
Chida, Tomohei, and Davies, Peter N. The Japanese Shipping and Shipbuilding Industries: A History of Their Modern Growth. London: Athlone Press, 1990.
Gibson, Andrew, and Donovan, Arthur. The Abandoned Ocean: A History of United States Maritime Policy. Columbia: University of South Carolina Press, 2000.
Haas, J. M. A Management Odyssey: The Royal Dockyards, 1714–1914. Lanham, MD: University Press of America, 2000.
Japan Ship Exporters' Association. Shipbuilding and Marine Engineering in Japan. Tokyo: Author, 1980, 1990, and 1999.
Lobley, Douglas. Ships through the Ages. New York: Octopus Books, 1972.
Moss, Michael, and Hume, John. Shipbuilders to the World: 125 Years of Harland and Wolff, Belfast, 1861–1986. Belfast: Blackstaff Press, 1986.
Pollard, Sidney, and Robertson, Paul. The British Shipbuilding Industry. Cambridge, MA: Harvard University Press, 1979.
Sager, Eric W., and Panting, Gerald E. Maritime Capital: The Shipping Industry of Atlantic Canada, 1820–1914. Kingston, Ontario: McGill-Queen's University Press, 1990.

The Industry. While the steamboat was the most dramatic maritime innovation of the period, most commerce continued to be carried by sailing ships. Americans had become the world's best builders of boats and ships, and the rise of British maritime power was made possible in part by American shipwrights, who had delivered to England an average of fifty ships each year before the Revolution. In 1769 shipyards in the American colonies, mainly in New England, but also in New York and the Chesapeake, produced 389 vessels. After the war, with British markets for American ships shut off and merchants excluded from English ports, the industry declined. In 1789 the new U.S. government put a higher tariff on ships built or owned by foreigners that entered American ports, hoping to stimulate the shipbuilding industry. It succeeded, with the total tonnage of American-built ships owned by Americans more than doubling by 1790, from 123,000 tons to 364,000 tons.
Because laws also forbade foreigners to buy American-built ships, more of these ships were owned by Americans, greatly increasing the United States' share of the world's carrying trade.

American Advantages. Americans had several advantages in building ships, most notably in their access to good timber. Shipyards tended to follow the forests, moving up the coast of Maine in the 1790s. Boston and New York shipbuilders invested in canals to help bring timber to their shipyards. Even with the forests closest to New York and Boston depleted, the country still had vast timber reserves, making the cost of construction much lower. An American ship, built of New England oak, would cost twenty-four dollars per ton; a similar ship built of fir along the Baltic coast would cost thirty-five dollars per ton. An American vessel made of more expensive live oak and cedar would cost thirty-six dollars to thirty-eight dollars per ton, while a similar vessel made of oak in England, France, or Holland would cost fifty-five dollars to sixty dollars per ton.

AMERICA RULES THE WAVES The resurgence of foreign trade after the end of the Revolutionary War in 1783 allowed the American ship industry to reestablish itself. Shipping became one of the most significant parts of the American economy. From 1790 to 1807 American shippers more than doubled their carrying capacity. In 1790 American ships carried 40.5 percent of the value of goods carried in the nation's foreign trade; by 1807 they were carrying 92 percent. Shipbuilding naturally became a vibrant part of the American economy, helped by abundant timber and naval stores and a skilled workforce. Tench Coxe described these advantages in 1794:

Ship-building is an art for which the United States are peculiarly qualified by their skill in the construction, and by the materials, with which this country abounds: and they are strongly tempted to pursue it by their commercial spirit, by the capital fisheries in their bays and on their coasts, and by the productions of a great and rapidly increasing agriculture. They build their oak vessels on lower terms than the cheapest European vessels of fir, pine, and larch. The cost of an oak ship in New England is about twenty-four Mexican dollars per ton fitted for sea: a fir vessel costs in the ports of the Baltic, thirty-five Mexican dollars: and the American ship will be much the most durable. The cost of a vessel of the American live-oak and cedar, which will last (if salted in her timbers) thirty years, is only thirty-six to thirty-eight dollars in our different ports; and an oak ship in the cheapest part of England, Holland, or France, fitted in the same manner will cost 55 to 60 dollars. In such a country, the fisheries and commerce, with due care and attention on the part of government, must be profitable.

Source: Tench Coxe, A View of the United States of America (Philadelphia: William Hall, Wrigley & Berriman, 1794), pp. 99–100.

Live Oak. More important than the quantity of timber was its quality. The live oak found in Georgia and South Carolina will not rot quickly. Under normal use a ship with a live-oak frame would last thirty years, three times as long as a ship made of inferior wood. Live oak is also somewhat denser than regular oak or other kinds of wood, making the ship much stronger. In fact, the U.S.
frigate Constitution, built in Boston in 1797, has such a strong frame that British cannonballs bounced off her hull in 1812, earning the ship the nickname “Old Ironsides.” Merchant ships made of live oak would not be expected to repel cannonballs but would resist rot and other enemies of wooden ships such as the teredo worm. In 1797 Congress appropriated $200,000 to preserve groves of live oak in the nation.

Wages and Exports. Another advantage to American shipbuilding was a well-trained labor force. International trade became so important to businesses that sailors' wages rose from eight dollars per month in the 1790s to thirty dollars a month by 1815, and the demand for good ships expanded so much that buyers would pay cash in advance to shipbuilders, who thus were able to pay their workers in hard currency. Shipwrights would earn about a dollar a day, more than farm laborers, and about the same wage as sailors or skilled carpenters. With the value of American exports growing from $23 million in 1790 to $52 million in 1815, good ships were in great demand. While shipbuilders did not become wealthy, they did earn good livings: in 1815 one New York shipbuilder earned $30,000. American shipbuilders earned a reputation for producing the world's best ships in this period.

Speed and Size. In addition to needing more ships, American businesses needed faster ones. Remarkable as the steamboat was, sailing technology made astonishing advances in this period. Merchants sought two different qualities in a ship: speed and size. The two could not be easily reconciled; a large ship which could carry bulky cargo could not sail as fast as a narrow ship which could quickly cut through the water. Boston shipbuilder John Peck experimented with long, narrow ships, which could both carry large cargoes and sail quickly. Elias Derby built a ship which sailed from Salem to Ireland in just eleven days; another of Derby's ships sailed to France and back in five weeks, the time it took some sailing ships to make one crossing. Massachusetts builders favored smaller vessels. In 1795 E. H. Derby's second Grand Turk, built at his Salem shipyard, had to be sold in New York because it was too large for Salem's harbor and for Derby's preferred method of trade. New York merchants preferred larger ships while New England merchants favored smaller, faster ones. With this greater speed, American ships were able to make two, three, or four trading voyages each year, while English ships typically made only one trip each year.

Algiers. The high quality of materials and the skills of the labor force made American ships the envy of the world. The Dey of Algiers in 1795 asked the American consul to send him some American shipbuilders. Send them poor, he told the consul, and they would return home rich. After making a treaty with the United States, the Dey contracted to have two merchant vessels built for his commercial fleet. The United States also built a frigate, the Crescent, as a special gift for the Dey. When this small fleet arrived in Algiers in 1798, it impressed all with the skills of American builders. No one, the American consul reported, had ever seen such beautiful ships, and the Dey, who had been threatening to attack American merchant ships, became convinced that the United States would be a dangerous enemy.

Freedom of the Seas. U.S. merchants did a tremendous business during the wars between England and France (1793–1815).
The United States followed a policy of neutrality and argued that neutral ships should be allowed to trade freely on the world’s seas. U.S. merchants grew wealthy at the expense of England and France while they supplied each side with American grain and took up much of the carrying trade merchants from those nations had formerly enjoyed. The French were first to object to this, and in 1797 they began capturing American merchant ships in the West Indies and Europe. The Adams administration responded with the use of the new navy, begun in 1793 to fight Algiers. In a series of naval battles the United States defeated the French all but once. In 1800 the two sides agreed to peace. One year later Tripoli announced that it would begin seizing American merchant vessels. The United States responded by sending its navy to blockade and bombard Tripoli. Arguing again for freedom of the seas, the United States declared war on England in 1812, and while the war at home went very badly, with the city of Washington burned and coastal New England blockaded, the navy, on the ocean and the Great Lakes, proved superior to the British. American sailors, trained in the merchant fleets, and shipbuilders, challenged to build sturdy, fast-sailing ships, defeated the British in many naval engagements. Free international commerce was vital to the survival of the American nation; the U.S. government would go to war to protect this principle. Thanks to the tremendous skill of American shipbuilders and sailors, the United States was able to maintain this principle. The frigate U.S.S. Constitution, completed in October 1797, remains in commission to this day, demonstrating the technological skill of American shipbuilders. Samuel Eliot Morison, The Maritime History of Massachusetts, 1783–1860 (Boston: Houghton Mifflin, 1961); Curtis P. Nettels, The Emergence of a National Economy, 1775–1815 (New York: Holt, Rinehart & Winston, 1962). SHIPBUILDING. Shipbuilding in the United States began out of necessity, flourished as maritime trade expanded, declined when industrialization attracted its investors, then revived in World War II. Shipyards grew from barren eighteenth-century establishments with a few workers using hand tools even for "large" ships (200 tons) to huge twentieth-century organizations where thousands of employees use ever-changing technology to build aircraft carriers of 70,000 tons. Today the United States no longer leads the world in ship production, but it is still a major force in marine technology and engineering. American shipbuilding began when Spanish sailors constructed replacements for ships wrecked on the North Carolina coast in the 1520s. Other Europeans launched small vessels for exploration and trade. In the 1640s the trading ventures of Massachusetts built vessels that established New England as a shipbuilding region. By the 1720s, however, New England shipyards faced competition from Pennsylvania and later from other colonies with growing merchant communities, such as Virginia, where slave labor boosted production. The typical eighteenth-century urban shipyard was a small waterfront lot with few if any permanent structures. Rural yards, where land was cheap and theft less of a problem, often had covered sawpits, storage sheds, and wharfs. The labor force consisted of about half a dozen men, sawyers and shipbuilders as well as apprentices, servants, or slaves. Work was sporadic, and accidents, sometimes fatal, were common. 
Yet from such facilities came 40 percent of Great Britain's oceangoing tonnage on the eve of the Revolution. After Independence, shipbuilding stagnated until European wars in the 1790s enabled American shipyards to launch neutral vessels for their countrymen and merchant ships or privateers for French and British buyers.

During the Golden Age of American shipbuilding, from the mid-1790s through the mid-1850s, shipping reached its highest proportional levels, the navy expanded, and the clipper ship became a symbol of national pride. New technology entered the shipyard: the steam engine supplied supplementary power for some sailing vessels and the sole power for others; iron first reinforced and then replaced some wooden hulls. Many shipowners, attracted to the promised economy of size, ordered larger ships that required more labor, raw materials, and technology. Meanwhile, a transportation revolution compelled coastal vessels to connect with and compete with canal barges, inland river trade, and railroads. At this time, many New England merchants turned to manufacturing for higher and steadier returns.

By the late 1850s, the glory days had begun to fade. Maine and Massachusetts shipyards launched more tonnage than anyone else, but they did not construct steamships, while builders outside New England recognized that the future belonged to steam, not sail. The Civil War promoted naval construction, with both sides making remarkable innovations, but the war devastated commercial shipbuilding. Confederate raids on Union ships convinced some Yankee merchants to sell their ships to foreign owners. By 1865, American tonnage in foreign trade was half that of the late 1850s; at the end of the decade it was down to a third.

In 1880, Pennsylvania shipyards launched almost half of what the top ten states constructed. Iron, not wood, now represented the future; most shipyards could not afford the transition from wood to iron. Massachusetts builders held on by mass-producing small boats for offshore fishing schooners. Capital investments per yard many times greater than those of other states allowed Pennsylvania and Delaware yards to succeed. With yards in six of the ten states producing at a rate of less than two vessels per year, many establishments did not survive the introduction of iron. Two successful shipyards of the period, William Cramp and Sons in Philadelphia and Newport News Shipbuilding and Drydock Company of Virginia, embraced the new technology and benefited from the naval modernization program of the 1890s. Naval contracts proved vital to these builders' success, and the strength of the navy depended upon such shipyards.

When the United States entered World War I, it undertook an unprecedented shipbuilding program. After the war, builders watched maritime trade decline through the 1920s as the coastal trade gave way to trains and trucks and quotas restricted the once profitable immigrant trade. The Newport News Shipbuilding and Drydock Company survived by performing non-maritime work such as building traffic lights. Relief did not come until the 1930s, when the U.S. government began ordering aircraft carriers to serve the dual purpose of strengthening the navy and providing jobs for the unemployed. At the outbreak of World War II, Great Britain asked the United States to mass-produce an outdated English freighter design that had many deficiencies but possessed the all-important virtue of simplicity.
Thanks to new welding techniques and modular construction, the “Liberty” ship became the most copied vessel in history. More than 2,700 were built—many completed in less than two months, some in a few weeks. This remarkable feat, accomplished by a hastily trained workforce using parts produced across the nation, was directed by Henry Kaiser, who had never before built a vessel. American shipyards also produced 800 Victory ships (a faster, more economical freighter), more than 300 tankers, and hundreds of other warships. American shipbuilding, a key factor in the Allied victory, increased 1,000 percent by war's end, making the United States the world's undisputed maritime power.

Following World War II, America abandoned maritime interests and focused on highways, factories, and planes. During the 1950s, Japanese, European, and Latin American shipbuilders outperformed American shipyards, while American Atlantic passenger liners succumbed to passenger jets. A nuclear-powered freighter, Savannah, proved both a commercial and public relations failure. While Americans pioneered development of the very economical container ship, it was quickly adopted by foreign competitors. Despite technical advances, shipbuilding continued to decline in the face of waning public and private support. Today, Japan, Korea, and China build over 90 percent of the world's commercial tonnage; the U.S. share is only 0.2 percent. Since 1992, U.S. shipyards have averaged fewer than nine new commercial ships of 1,000 tons or more per year. Submarines and aircraft carriers are still under construction, although in reduced numbers; guided-missile destroyers and support vessels are on the rise. Modern maritime technology requires significant resources and expertise. Unlike the colonial years, when every seaport, however small, had a few shipyards, today the nation has just half a dozen major shipyards in total. The United States still enjoys an abundance of materials, skilled labor, and engineering ingenuity. It requires only large-scale public and private support to reignite interest in this once flourishing industry.

Chapelle, Howard I. The National Watercraft Collection. Washington, D.C.: United States National Museum, 1960. 2d ed., Washington, D.C.: Smithsonian Institution Press, 1976.
Goldenberg, Joseph A. Shipbuilding in Colonial America. Charlottesville, Va.: University Press of Virginia, 1976.
Pedraja, René de la. The Rise and Decline of U.S. Merchant Shipping in the Twentieth Century. New York: Macmillan, 1992.

In the seventeenth and eighteenth centuries wooden sailing ships were built at various locations around the coast of Ireland, including Belfast Lough. Belfast's first significant shipbuilding firm was established in 1791 by William Ritchie, a shipbuilder from Saltcoats on the west coast of Scotland. After 1850, product and process innovation, with the development of iron and later steel steamships together with scale economies, led to larger establishments and firms and to regional concentration in the shipbuilding industry throughout the United Kingdom. By the late nineteenth century most U.K. merchant tonnage was launched on the River Clyde in Scotland, the northeast coast of England, and the River Lagan in Belfast. The industry in Belfast consisted of two firms: Harland and Wolff and Workman, Clark and Company. In the years from 1906 to 1914 they produced 10 percent of the United Kingdom's output and 6 percent of the world's output.
Harland and Wolff was formed in 1861 by Edward Harland, an engineer and shipbuilder from the northeast of England, and Gustav Wolff, an English-trained engineer from Hamburg. The partnership acquired a small yard on Queen's Island, which Harland had started to manage for Robert Hickson in 1854 and then purchased four years later. The Belfast Harbour Commissioners played an important role in the creation of this yard and in the subsequent development of shipbuilding on the River Lagan. Workman, Clark and Company was formed in 1880 by Frank Workman and George Clark. Both men had served as apprentices with Harland and Wolff. The new company's yards were located mainly on the northern shore of the Lagan.

As with other U.K. firms, close links with shipping-line customers allowed the Belfast firms to maintain a high level of output and hence capacity utilization and also to develop product specialization, thereby enabling them to sustain unit-cost advantages over competitors. Under the leadership of William Pirrie, Harland and Wolff was one of a small number of yards equipped to construct the largest vessels, including the luxury liners Olympic (1911) and Titanic (1912). Workman Clark specialized in medium-sized cargo boats and combined cargo and passenger vessels; the firm pioneered the use of the Parsons turbine engine and the construction of refrigerated meat- and fruit-carrying vessels.

Employment at Harland and Wolff increased from 500 in 1861 to 2,200 in 1871, and from 9,000 in 1900 to 14,000 in 1914. Altogether 20,000 were employed in shipbuilding in Belfast in 1914, and an all-time peak of nearly 30,000 held such jobs in 1919. Belfast did not have a large reserve of skilled labor. Skilled workers from Scotland and England were attracted and retained by offering them a premium on regional rates of pay: markets for skilled labor were interregional. These premiums did not apply to unskilled labor, which was in plentiful local supply. Because of their relative scarcity the skilled shipyard workers had considerable bargaining power and, as in Great Britain, were able to exercise a traditional right to select apprentices for their crafts. This informal labor market meant that recruitment frequently came from within the established local communities, often from within family groups. These employment practices continued into the twentieth century and help to explain the religious mix of the shipyard labor force. Serious sectarian incidents occurred in the shipyards in 1886, when there was a sharp downturn in shipbuilding output and employment, and in 1920, at the beginning of another major downturn for the Belfast yards. Each of these episodes took place at a time of heightened political tension over the national question: in 1886 and 1920 riots occurred during the first Home Rule crisis and as the Anglo-Irish War edged into the north, respectively.

In the 1920s and 1930s U.K. shipbuilders confronted the problems of slow growth in demand for shipping services, excess capacity, and increased foreign competition. Both Belfast firms experienced severe financial difficulties. Harland and Wolff responded by entering the market for oil tankers in the 1920s and diversified in 1936 by entering into partnership with Short Brothers to produce aircraft. Workman Clark did not survive the world depression that began in 1929 and launched its last ship in 1934.
The outbreak of World War II, like the previous world war, caused a boom in output; Harland and Wolff's contribution made the shipyard a target for German bombs in 1941. The long postwar boom saw an increase in demand for oil tankers and bulk carriers. Despite a decline in the U.K. shipbuilding industry's share of world output, tonnage launched by Harland and Wolff reached a historical high in the 1970s. However, the firm was in receipt of government financial support from 1966, and in 1975 the Northern Ireland government became the sole shareholder in the company. In 1989 Harland and Wolff was returned to the private sector as Harland and Wolff Holdings after a management and employee buyout in partnership with companies associated with the Norwegian shipowner Fred Olsen. Following privatization, the company diversified its product mix to include not just oil tankers and bulk carriers but also offshore production vessels for the oil and gas industry. After further restructuring in the late 1990s the dominant shareholder in the twenty-first century is Fred Olsen Energy. Diversification continues: Recalling the glory days at the start of the twentieth century the company is developing a research and tourism area on Queen's Island called Titanic Quarter. However, its shipbuilding days may have come to an end with the launch on 17 January 2003 of Anvil Point, a roll-on, roll-off ferry built for service with the U.K. Ministry of Defence. Geary, F., and W. Johnson. "Shipbuilding in Belfast, 1861–1986." Irish Economic and Social History 16 (1989): 42–64. Moss, Michael, and John R. Hume. Shipbuilders to the World: 125 Years of Harland and Wolff, Belfast, 1861–1986. 1986. Frank Geary and Walford Johnson The Clyde was a latecomer as a major shipbuilding river. The main hull-builders were downriver at Greenock and Port Glasgow. Deepening the river served both commerce and industry, for Glasgow's engine-builders came to dominate British shipbuilding. Labour costs in the new shipyards were lower than on the Thames, and technical innovations gave the Clyde major advantages. In 1813–14 this region produced only 4.5 per cent of the British tonnage, and this market share remained relatively constant until the 1840s. In the production of iron river steamers the Clyde falteringly led the way in the early 19th cent. but between 1840 and 1870 produced two-thirds of British steam tonnage. Early marine engines used fuel prodigally; Glasgow engineers solved this problem and also improved boilers and methods of construction and propulsion: the screw propeller replaced the paddle in the 1840s; compound engines were installed from 1853, dramatically cutting coal consumption; iron hulls increased the scale of shipping, reducing freight costs and encouraging the growth of international trade. Glasgow became the home base for many shipping lines, including Cunard, and their orders tended to go to Clyde yards. Steam and iron eclipsed wood and sail in the 1850s. Steam tonnage, which in 1850 represented under 7 per cent of British output, accounted for 70 per cent by 1870. About 24,000 of 47,500 men working in shipbuilding in 1871 were resident in Scotland, all but a few employed in the Clyde yards. They produced at least one-third of British tonnage—mostly specialist vessels—every year from 1870 to 1914. The Wear initially challenged the Clyde, producing about one-third of Britain's merchant tonnage in the 1830s, but the north-east increasingly specialized in lower-cost tramp shipping. 
Belfast was essentially an extension of Clyde capacity, and by 1914 one firm, Harland and Wolff, dominated its shipbuilding just as Cammell Laird on the Mersey and Vickers-Armstrong at Barrow controlled regional output. The integration of iron, steel, coal, and shipbuilding as major exporting industries explains why the economy which made shipbuilding regions prosperous before 1914 should be a source of economic weakness after 1920. The long decline of shipbuilding had a downward multiplier effect on these regional economies which became the depressed areas of inter-war Britain. Demand for capital goods declined rapidly after 1920, but shipbuilding suffered most. World capacity had been grossly inflated during the First World War, but peacetime demand was reduced by the decline in world trade. In 1933 launchings from British yards fell to 7 per cent of the 1914 figure. Foreign orders for new ships were markedly reduced. Britain was slow to move into the production of motor vessels which were most in demand; foreign governments provided subsidies to retain orders within their own boundaries. In 1930 ‘National Shipbuilders' Security Limited’ was formed to reduce the number of shipyards and excess capacity. By 1937, 28 firms had been bought and closed, with a capacity of about 3,500,000 tons. The government in 1935 sponsored an ineffective ‘scrap and build’ scheme whereby owners were subsidized to scrap 2 tons of shipping for every new ton they ordered. Rearmament and the Second World War revived shipbuilding, and after 1945 the world dollar shortage drove shipowners to order in Britain. World trade expanded and kept the boom going, but increasingly foreign yards benefited from this exceptional demand. The Clyde produced a third of British tonnage in the early 1950s (although demand was greatest for tankers and cargo ships); the Wear and Tees a quarter and the Tyne about one-sixth; Belfast, the Mersey, and Barrow nearly one-quarter. In 1956 Britain was third in export sales behind Germany and Japan; by 1977 she produced 4 per cent of world output (compared with 60 per cent in 1910–14), and British owners were ordering ships from overseas. Asia, with its low labour costs and modern equipment, became the most significant continent for ship production. The government responded by further rationalization under British Shipbuilders (1977), a public corporation. Technically backward, the industry was faced with closures and redundancies until the government returned firms to private ownership and a process of private investment in the 1980s. Shipbuilding survives but subject to intense foreign competition. See also merchant navy.
Solve for x and represent your answer on a number line. Sketch the solution to each system of inequalities. Multi-step inequalities worksheets contain solving, graphing, multiple-choice questions and more. Write down an inequality which involves c and solve this inequality. Use the link at the top of the page for a printable version. The topic of inequalities is taken from the GCSE books of the Mathematics Enhancement Program. Each worksheet has problems on determining the inequality from the number line. Can the situation in the picture be represented by an equation or an inequality?

This GCSE maths revision section looks at inequalities. Linear inequality worksheets cover graphing inequalities, writing an inequality from its graph, solving one-step, two-step and multi-step inequalities, graphing solutions, solving and graphing compound inequalities, absolute value inequalities and more. Also, choose word problems from Holt Algebra 1, chapters 3-4 and 3-5. Inequalities are mathematical expressions involving the symbols >, <, ≥ and ≤; in this worksheet, students solve simple two-stage inequalities. One activity gives students a real-life perspective on systems of inequalities by using simple business models; after reading a few sentences about a business situation, students determine the inequalities involved. There are also combining-like-terms worksheets for simplifying expressions, as well as worksheets on graphing linear equations and on inequalities and number lines, arranged progressively for teaching. They are also excellent for one-to-one tuition and for interventions.

Ideal for GCSE revision, the solving-inequalities worksheets contain exam-type questions that gradually increase in difficulty. All inequalities on one worksheet have addition or subtraction on the same side of the inequality as the variable; on these problems, students need to isolate the variable using only a single step. Another resource is a set of 5 PDFs and a PowerPoint quiz covering everything that is needed on linear inequalities up to GCSE, with three worksheets at different difficulty levels: first steps, strengthen and extend. Typical exam questions include: on the grid, clearly indicate the region that satisfies all of these inequalities; write down all the integers that satisfy both inequalities shown in parts (a) and (b). There are also questions on solving inequality relationships where quantities are less than or more than an expression in x.

A system of linear inequalities is a collection of linear inequalities that must all be satisfied; in linear programming (see Chapter 5, Linear Inequalities and Linear Programming), the linear inequalities are the constraints. One packet is designed for students to complete after learning about solving systems of linear inequalities. Exam rubric: materials required for the examination are included with the question papers.
When solving a combined inequality joined by “and”, the connective means intersection: the solution is only what is common to the two inequalities. When the inequality is joined by “or”, the connective means union: the solution is everything mentioned in either of the two inequalities. So when graphing a combined inequality, the first step is to graph each of the inequalities separately. An introduction to inequalities, including the notation and both graphical and algebraic approaches, is available for Foundation-level GCSE, along with an eighth-grade solving-inequalities worksheet and a systems of inequalities linear programming worksheet. This section shows you how to solve inequalities with one variable and how to solve inequalities with two variables; the set of all solutions of an inequality is called the solution set. There are teaching resources on compound inequalities with solutions, and sections on equations and inequalities explain everything needed to solve the different types of equations and inequalities that appear in IGCSE and GCSE maths exams. One Higher-tier collection contains 91 GCSE sample and specimen questions on inequalities from AQA, OCR, Pearson Edexcel and WJEC Eduqas. Exam papers list the materials required: a ruler graduated in centimetres and millimetres, a protractor, compasses, a pen, an HB pencil and an eraser; tracing paper may be used if needed, and students are reminded to read each question carefully before beginning to answer it. Typical instructions include: solve the inequality and then graph the solution on the provided number line; represent each of the inequalities below on a number line; use k to represent the number of knives he can sell to receive his bonus. One-step inequalities worksheets have students graph, write, and solve inequalities that require only a single step, while further sets combine division with addition and subtraction. A larger solving-inequalities worksheet has over 50 questions on solving linear inequalities, including one set strictly for Higher level, and a GCSE revision resource offers exam-type questions that gradually increase in difficulty; note that its solving method does not consider problems which involve squares. Related material covers solving equations, quadratic expressions, combining like terms, dividing fractions while working with unit rates, ratio word problems (for example, it snowed 2/3 of an inch every 1/6 of an hour), and using the substitution method to solve systems of equations. Understanding inequalities will be a snap with thorough, engaging free worksheets that take students from graphing to linear equations, multi-step inequalities, and much more.
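The intersection-versus-union distinction described above is easy to check numerically. The following Python sketch is not taken from any of the worksheets mentioned; the bounds −2 and 3 are illustrative values chosen only to show how “and” and “or” filter a set of sample points.

```python
# Compound inequalities: "and" keeps the values satisfying both conditions
# (the intersection); "or" keeps the values satisfying either condition
# (the union). The bounds below are illustrative, not from any worksheet.
xs = [n * 0.5 for n in range(-10, 11)]        # sample values from -5 to 5

both = [x for x in xs if x > -2 and x < 3]    # intersection: -2 < x < 3
either = [x for x in xs if x < -2 or x > 3]   # union: x < -2 or x > 3

print(both)    # values strictly between -2 and 3
print(either)  # values strictly outside the interval [-2, 3]
```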
These worksheets are especially meant for pre-algebra and algebra 1 courses (grades 7–9).
History of radar

The history of radar (where radar stands for RAdio Detection And Ranging) started with experiments by Heinrich Hertz in the late 19th century that showed that radio waves were reflected by metallic objects. This possibility was suggested in James Clerk Maxwell's seminal work on electromagnetism. However, it was not until the early 20th century that systems able to use these principles became widely available, and it was the German inventor Christian Hülsmeyer who first used them to build a simple ship detection device intended to help avoid collisions in fog (Reichspatent Nr. 165546). Numerous similar systems, which provided directional information to objects over short ranges, were developed over the next two decades. The development of systems able to produce short pulses of radio energy was the key advance that allowed modern radar systems to come into existence. The range could be determined by timing the pulses on an oscilloscope, while the direction of the antenna revealed the angular location of the targets. The two, combined, produced a "fix", locating the target relative to the antenna. In the 1934–1939 period, eight nations independently, and in great secrecy, developed systems of this type: the United Kingdom, Germany, the United States, the USSR, Japan, the Netherlands, France, and Italy. In addition, Britain shared its information with the United States and four Commonwealth countries: Australia, Canada, New Zealand, and South Africa, and these countries also developed their own radar systems. During the war, Hungary was added to this list. The term RADAR was coined in 1939 by the United States Signal Corps as it worked on these systems for the Navy. Progress during the war was rapid and of great importance, probably one of the decisive factors for the victory of the Allies. A key development was the magnetron in the UK, which allowed the creation of relatively small systems with sub-meter resolution. By the end of hostilities, Britain, Germany, the United States, the USSR, and Japan had a wide variety of land- and sea-based radars as well as small airborne systems. After the war, radar use was widened to numerous fields including: civil aviation, marine navigation, radar guns for police, meteorology and even medicine. Key developments in the post-war period include the travelling wave tube as a way to produce large quantities of coherent microwaves, the development of signal delay systems that led to phased array radars, and ever-increasing frequencies that allow higher resolutions. Increases in signal processing capability due to the introduction of solid-state computers have also had a large impact on radar use. The place of radar in the larger story of science and technology is argued differently by different authors. On the one hand, radar contributed very little to theory, which had largely been known since the days of Maxwell and Hertz. Therefore, radar did not advance science, but was simply a matter of technology and engineering. Maurice Ponte, one of the developers of radar in France, states: The fundamental principle of the radar belongs to the common patrimony of the physicists; after all, what is left to the real credit of the technicians is measured by the effective realisation of operational materials. But others point out the immense practical consequences of the development of radar. Far more than the atomic bomb, radar contributed to the Allied victory in World War II.
Robert Buderi states that it was also the precursor of much modern technology. From a review of his book: ... radar has been the root of a wide range of achievements since the war, producing a veritable family tree of modern technologies. Because of radar, astronomers can map the contours of far-off planets, physicians can see images of internal organs, meteorologists can measure rain falling in distant places, air travel is hundreds of times safer than travel by road, long-distance telephone calls are cheaper than postage, computers have become ubiquitous and ordinary people can cook their daily dinners in the time between sitcoms, with what used to be called a radar range. In 1886–1888 the German physicist Heinrich Hertz conducted his series of experiments that proved the existence of electromagnetic waves (including radio waves), predicted in equations developed in 1862–4 by the Scottish physicist James Clerk Maxwell. In Hertz's 1887 experiment he found that these waves would transmit through different types of materials and also would reflect off metal surfaces in his lab as well as conductors and dielectrics. The nature of these waves being similar to visible light in their ability to be reflected, refracted, and polarized would be shown by Hertz and subsequent experiments by other physicists. Radio pioneer Guglielmo Marconi noticed radio waves were being reflected back to the transmitter by objects in radio beacon experiments he conducted on March 3, 1899 on Salisbury Plain. In 1916 he and British engineer Charles Samuel Franklin used short-waves in their experiments, critical to the practical development of radar. He would relate his findings 6 years later in a 1922 paper delivered before the Institution of Electrical Engineers in London: I also described tests carried out in transmitting a beam of reflected waves across country ... and pointed out the possibility of the utility of such a system if applied to lighthouses and lightships, so as to enable vessels in foggy weather to locate dangerous points around the coasts ... It [now] seems to me that it should be possible to design [an] apparatus by means of which a ship could radiate or project a divergent beam of these rays in any desired direction, which rays, if coming across a metallic object, such as another steamer or ship, would be reflected back to a receiver screened from the local transmitter on the sending ship, and thereby immediately reveal the presence and bearing of the other ship in fog or thick weather. In 1904, Christian Hülsmeyer gave public demonstrations in Germany and the Netherlands of the use of radio echoes to detect ships so that collisions could be avoided. His device consisted of a simple spark gap used to generate a signal that was aimed using a dipole antenna with a cylindrical parabolic reflector. When a signal reflected from a ship was picked up by a similar antenna attached to the separate coherer receiver, a bell sounded. During bad weather or fog, the device would be periodically spun to check for nearby ships. The apparatus detected the presence of ships up to 3 kilometres (1.6 nmi), and Hülsmeyer planned to extend its capability to 10 kilometres (5.4 nmi). It did not provide range (distance) information, only warning of a nearby object. He patented the device, called the telemobiloscope, but due to lack of interest by the naval authorities the invention was not put into production. Hülsmeyer also received a patent amendment for estimating the range to the ship. 
Using a vertical scan of the horizon with the telemobiloscope mounted on a tower, the operator would find the angle at which the return was the most intense and deduce, by simple triangulation, the approximate distance. This is in contrast to the later development of pulsed radar, which determines distance via the two-way transit time of the pulse. In 1915, Robert Watson Watt joined the Meteorological Office as a meteorologist, working at an outstation at Aldershot in Hampshire. Over the next 20 years, he studied atmospheric phenomena and developed the use of radio signals generated by lightning strikes to map out the position of thunderstorms. The difficulty in pinpointing the direction of these fleeting signals using rotatable directional antennas led, in 1923, to the use of oscilloscopes in order to display the signals. The operation eventually moved to the outskirts of Slough in Berkshire, and in 1927 formed the Radio Research Station (RRS), Slough, an entity under the Department of Scientific and Industrial Research (DSIR). Watson Watt was appointed the RRS Superintendent. As war clouds gathered over Britain, the likelihood of air raids and the threat of invasion by air and sea drove a major effort in applying science and technology to defence. In November 1934, the Air Ministry established the Committee for the Scientific Survey of Air Defence (CSSAD) with the official function of considering "how far recent advances in scientific and technical knowledge can be used to strengthen the present methods of defence against hostile aircraft". Commonly called the "Tizard Committee" after its Chairman, Sir Henry Tizard, this group had a profound influence on technical developments in Britain. H. E. Wimperis, Director of Scientific Research at the Air Ministry and a member of the Tizard Committee, had read a German newspaper article claiming that the Germans had built a death ray using radio signals, accompanied by an image of a very large radio antenna. Both concerned and potentially excited by this possibility, but highly skeptical at the same time, Wimperis looked for an expert in the field of radio propagation who might be able to pass judgement on the concept. Watson Watt, Superintendent of the RRS, was by now well established as an authority in the field of radio, and in January 1935, Wimperis contacted him asking if radio might be used for such a device. Watson Watt discussed this with his scientific assistant, Arnold F. 'Skip' Wilkins, who quickly produced a back-of-the-envelope calculation showing that the energy required would be enormous. Watson Watt wrote back that this was unlikely, but added the following comment: "Attention is being turned to the still difficult, but less unpromising, problem of radio detection and numerical considerations on the method of detection by reflected radio waves will be submitted when required". Over the following several weeks, Wilkins considered the radio detection problem. He outlined an approach and backed it with detailed calculations of necessary transmitter power, reflection characteristics of an aircraft, and needed receiver sensitivity. He proposed using a directional receiver based on Watson Watt's lightning detection concept, listening for powerful signals from a separate transmitter. Timing, and thus distance measurements, would be accomplished by triggering the oscilloscope's trace with a muted signal from the transmitter, and then simply measuring the returns against a scale.
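The scale measurement Wilkins describes rests on the standard two-way transit-time relation. The short Python sketch below is not part of the historical record; the 1-millisecond delay is simply an illustrative value, and the helper name is mine.

```python
# Pulse ranging: an echo travels out to the target and back, so the
# one-way range is half the path covered during the measured delay.
C = 299_792_458.0  # speed of light in m/s

def range_from_echo_delay(delay_s: float) -> float:
    """Target range in metres for a measured round-trip delay in seconds."""
    return C * delay_s / 2.0

# Illustrative value only: a 1 ms round-trip delay corresponds to ~150 km.
print(range_from_echo_delay(1e-3))  # ~149,896 m
```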
Watson Watt sent this information to the Air Ministry on February 12, 1935, in a secret report titled "The Detection of Aircraft by Radio Methods". Reflection of radio signals was critical to the proposed technique, and the Air Ministry asked if this could be proven. To test this, Wilkins set up receiving equipment in a field near Upper Stowe, Northamptonshire. On February 26, 1935, a Handley Page Heyford bomber flew along a path between the receiving station and the transmitting towers of a BBC shortwave station in nearby Daventry. The aircraft reflected the 6 MHz (49 m) BBC signal, and this was readily detected by Arnold "Skip" Wilkins using Doppler-beat interference at ranges up to 8 mi (13 km). This convincing test, known as the Daventry Experiment, was witnessed by a representative from the Air Ministry, and led to the immediate authorization to build a full demonstration system. This experiment was later reproduced by Wilkins for the 1977 BBC television series The Secret War episode "To See a Hundred Miles". Based on pulsed transmission as used for probing the ionosphere, a preliminary system was designed and built at the RRS by the team. Their existing transmitter had a peak power of about 1 kW, and Wilkins had estimated that 100 kW would be needed. Edward George Bowen was added to the team to design and build such a transmitter. Bowen's transmitter operated at 6 MHz (50 m), had a pulse-repetition rate of 25 Hz, a pulse width of 25 μs, and approached the desired power. Orfordness, a narrow 19-mile (31 km) peninsula in Suffolk along the coast of the North Sea, was selected as the test site. Here the equipment would be openly operated in the guise of an ionospheric monitoring station. In mid-May 1935, the equipment was moved to Orfordness. Six wooden towers were erected, two for stringing the transmitting antenna, and four for the corners of crossed receiving antennas. In June, general testing of the equipment began. On June 17, the first target was detected—a Supermarine Scapa flying boat at 17 mi (27 km) range. It is historically correct that, on June 17, 1935, radio-based detection and ranging was first demonstrated in Britain. Watson Watt, Wilkins, and Bowen are generally credited with initiating what would later be called radar in this nation. In December 1935, the British Treasury appropriated £60,000 for a five-station system called Chain Home (CH), covering approaches to the Thames Estuary. The secretary of the Tizard Committee, Albert Percival Rowe, coined the acronym RDF as a cover for the work, meaning Range and Direction Finding but suggesting the already well-known Radio Direction Finding. Late in 1935, responding to Frederick Lindemann's recognition of the need for night detection and interception gear, and realizing that existing transmitters were too heavy for aircraft, Bowen proposed fitting only receivers, what would later be called bistatic radar. Lindemann's proposals for infrared sensors and aerial mines would prove impractical. It would take Bowen's efforts, at the urging of Tizard, who became increasingly concerned about the need, to see Air to Surface Vessel (ASV), and through it Airborne Interception (AI), radar to fruition. In 1937, Bowen's team used their crude ASV radar, the world's first airborne set, to detect the Home Fleet in dismal weather. Only in spring 1939, "as a matter of great urgency" after the failure of the searchlight system Silhouette, did attention turn to using ASV for air-to-air interception (AI).
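Two standard pulse-radar relations, not stated in the source but implied by the figures just quoted for the Orfordness transmitter, put those numbers in context: the pulse-repetition frequency caps the unambiguous range, and the pulse width sets the nominal two-target range resolution. The Python sketch below uses only the 25 Hz and 25 μs values given above.

```python
C = 299_792_458.0  # speed of light in m/s

def max_unambiguous_range(prf_hz: float) -> float:
    """Farthest range whose echo returns before the next pulse is sent."""
    return C / (2.0 * prf_hz)

def range_resolution(pulse_width_s: float) -> float:
    """Nominal separation needed to resolve two targets on one bearing."""
    return C * pulse_width_s / 2.0

# Figures quoted for the Orfordness transmitter: 25 Hz PRF, 25 us pulses.
print(max_unambiguous_range(25.0) / 1000.0)  # ~5,996 km, no practical limit
print(range_resolution(25e-6) / 1000.0)      # ~3.7 km
```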
Demonstrated in June 1939, AI got a warm reception from Air Chief Marshal Hugh Dowding, and even more so from Churchill. This proved problematic. Its accuracy, dependent on the height of the aircraft, meant that CH, accurate to only about 4 statute miles (6.4 km), was not accurate enough to place an aircraft within its detection range, and an additional system was required. Its wooden chassis had a disturbing tendency to catch fire (even with attention from expert technicians), so much so that Dowding, when told that Watson Watt could provide hundreds of sets, demanded "ten that work". The Cossor and MetroVick sets were overweight for aircraft use and the RAF lacked night fighter pilots, observers, and suitable aircraft. In 1940, John Randall and Harry Boot developed the cavity magnetron, which made ten-centimetre-wavelength radar a reality. This device, the size of a small dinner plate, could easily be carried on aircraft, and the short wavelength meant that the antenna could also be small, and hence suitable for mounting on aircraft. The short wavelength and high power made it very effective at spotting submarines from the air. The solution to night intercepts would be provided by Dr. W. B. "Ben" Lewis, who proposed a new, more accurate ground control display, the Plan Position Indicator (PPI), a new Ground-Controlled Interception (GCI) radar, and reliable AI radar. The AI sets would ultimately be built by EMI. GCI was unquestionably delayed by Watson Watt's opposition to it and his belief that CH was sufficient, as well as by Bowen's preference for using ASV for navigation, despite Bomber Command disclaiming a need for it, and by Tizard's reliance on the faulty Silhouette system. In March 1936, the work at Orfordness was moved to Bawdsey Manor, nearby on the mainland. Until this time, the work had officially still been under the DSIR, but was now transferred to the Air Ministry. At the new Bawdsey Research Station, the Chain Home (CH) equipment was assembled as a prototype. There were equipment problems when the Royal Air Force (RAF) first exercised the prototype station in September 1936. These were cleared by the next April, and the Air Ministry started plans for a larger network of stations. Initial hardware at CH stations was as follows: the transmitter operated on four pre-selected frequencies between 20 and 55 MHz, adjustable within 15 seconds, and delivered a peak power of 200 kW. The pulse duration was adjustable between 5 and 25 μs, with a repetition rate selectable as either 25 or 50 Hz. For synchronization of all CH transmitters, the pulse generator was locked to the 50 Hz of the British power grid. Four 360-foot (110 m) steel towers supported transmitting antennas, and four 240-foot (73 m) wooden towers supported cross-dipole arrays at three different levels. A goniometer was used to improve the directional accuracy from the multiple receiving antennas. By the summer of 1937, 20 initial CH stations were in check-out operation. A major RAF exercise was performed before the end of the year, and was such a success that £10,000,000 was appropriated by the Treasury for an eventual full chain of coastal stations. At the start of 1938, the RAF took over control of all CH stations, and the network began regular operations. In May 1938, Rowe replaced Watson Watt as Superintendent at Bawdsey. In addition to the work on CH and successor systems, there was now major work in airborne RDF equipment. This was led by E. G. Bowen and centered on 200-MHz (1.5 m) sets.
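Since the account repeatedly pairs wavelengths with frequencies (10 cm, 1.5 m, and so on), it may help to make the conversion explicit. This small Python sketch is only illustrative; the relation between wavelength and frequency is the standard free-space one and is not drawn from the source.

```python
C = 299_792_458.0  # speed of light in m/s

def frequency_hz(wavelength_m: float) -> float:
    """Frequency corresponding to a free-space wavelength."""
    return C / wavelength_m

# The pairings quoted in the text follow directly from this relation.
print(frequency_hz(0.10) / 1e9)  # 10 cm -> ~3.0 GHz (cavity-magnetron band)
print(frequency_hz(1.5) / 1e6)   # 1.5 m -> ~200 MHz (airborne RDF sets)
```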
The higher frequency allowed smaller antennas, appropriate for aircraft installation. From the initiation of RDF work at Orfordness, the Air Ministry had kept the British Army and the Royal Navy generally informed; this led to both of these forces having their own RDF developments. In 1931, at the Woolwich Research Station of the Army's Signals Experimental Establishment (SEE), W. A. S. Butement and P. E. Pollard had examined pulsed 600 MHz (50-cm) signals for detection of ships. Although they prepared a memorandum on this subject and performed preliminary experiments, for undefined reasons the War Office did not give it consideration. As the Air Ministry's work on RDF progressed, Colonel Peter Worlledge of the Royal Engineer and Signals Board met with Watson Watt and was briefed on the RDF equipment and techniques being developed at Orfordness. His report, “The Proposed Method of Aeroplane Detection and Its Prospects”, led the SEE to set up an “Army Cell” at Bawdsey in October 1936. This was under E. Talbot Paris and the staff included Butement and Pollard. The Cell's work emphasized two general types of RDF equipment: gun-laying (GL) systems for assisting anti-aircraft guns and searchlights, and coastal-defense (CD) systems for directing coastal artillery and the defense of Army bases overseas. Pollard led the first project, a gun-laying RDF code-named Mobile Radio Unit (MRU). This truck-mounted system was designed as a small version of a CH station. It operated at 23 MHz (13 m) with a power of 300 kW. A single 105-foot (32 m) tower supported a transmitting antenna, as well as two receiving antennas set orthogonally for estimating the signal bearing. In February 1937, a developmental unit detected an aircraft at a range of 60 miles (96 km). The Air Ministry also adopted this system as a mobile auxiliary to the CH system. In early 1938, Butement started the development of a CD system based on Bowen's evolving 200-MHz (1.5-m) airborne sets. The transmitter had a 400 Hz pulse rate, a 2-μs pulse width, and 50 kW power (later increased to 150 kW). Although many of Bowen's transmitter and receiver components were used, the system would not be airborne, so there were no limitations on antenna size. Primary credit for introducing beamed RDF systems in Britain must be given to Butement. For the CD, he developed a large dipole array, 10 feet (3.0 m) high and 24 feet (7.3 m) wide, giving much narrower beams and higher gain. This could be rotated at a speed of up to 1.5 revolutions per minute. For greater directional accuracy, lobe switching on the receiving antennas was adopted. As a part of this development, he formulated the first – at least in Britain – mathematical relationship that would later become well known as the “radar range equation”. By May 1939, the CD RDF could detect aircraft flying as low as 500 feet (150 m) and at a range of 25 mi (40 km). With an antenna 60 feet (18 m) above sea level, it could determine the range of a 2,000-ton ship at 24 mi (39 km) with an angular accuracy of as little as a quarter of a degree. Although the Royal Navy maintained close contact with the Air Ministry work at Bawdsey, they chose to establish their own RDF development at the Experimental Department of His Majesty's Signal School (HMSS) in Portsmouth, Hampshire, on the south coast. HMSS started RDF work in September 1935. Initial efforts, under R. F. Yeo, were in frequencies between 75 MHz (4 m) and 1.2 GHz (25 cm).
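The relationship attributed to Butement above is usually written today in the monostatic form sketched below. This shows the standard modern version, which may differ in detail from his original formulation, and every numerical value in it is purely illustrative rather than a parameter of the CD set.

```python
import math

def max_range_m(pt_w, gain_tx, gain_rx, wavelength_m, rcs_m2, p_min_w):
    """Standard monostatic radar range equation: the greatest range at which
    the echo power still reaches the receiver's minimum detectable signal."""
    numerator = pt_w * gain_tx * gain_rx * wavelength_m**2 * rcs_m2
    denominator = (4.0 * math.pi) ** 3 * p_min_w
    return (numerator / denominator) ** 0.25

# Illustrative values only: 150 kW peak power, 20 dB antenna gains,
# 1.5 m wavelength, 10 m^2 target cross-section, 1 pW receiver sensitivity.
print(max_range_m(150e3, 100.0, 100.0, 1.5, 10.0, 1e-12) / 1000.0)  # ~64 km
```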
All of the work was under the utmost secrecy; it could not even be discussed with other scientists and engineers at Portsmouth. A 75 MHz range-only set was eventually developed and designated Type 79X. Basic tests were done using a training ship, but the operation was unsatisfactory. In August 1937, the RDF development at HMSS changed, with many of their best researchers brought into the activity. John D. S. Rawlinson was made responsible for improving the Type 79X. To increase the efficiency, he decreased the frequency to 43 MHz ( 7 metre wavelength ). Designated Type 79Y, it had separate, stationary transmitting and receiving antennas. Prototypes of the Type 79Y air-warning system were successfully tested at sea in early 1938. The detection range on aircraft was between 30 and 50 miles (48 and 80 km), depending on height. The systems were then placed into service in August on the cruiser HMS Sheffield and in October on the battleship HMS Rodney. These were the first vessels in the Royal Navy with RDF systems. A radio-based device for remotely indicating the presence of ships was built in Germany by Christian Hülsmeyer in 1904. Often referred to as the first radar system, this did not directly measure the range (distance) to the target, and thus did not meet the criteria to be given this name. Over the following three decades in Germany, a number of radio-based detection systems were developed but none were true radars. This situation changed before World War II. Developments in three leading industries are described. In the early 1930s, physicist Rudolf Kühnhold, Scientific Director at the Kriegsmarine (German navy) Nachrichtenmittel-Versuchsanstalt (NVA—Experimental Institute of Communication Systems) in Kiel, was attempting to improve the acoustical methods of underwater detection of ships. He concluded that the desired accuracy in measuring distance to targets could be attained only by using pulsed electromagnetic waves. During 1933, Kühnhold first attempted to test this concept with a transmitting and receiving set that operated in the microwave region at 13.5 cm (2.22 GHz). The transmitter used a Barkhausen-Kurz tube (the first microwave generator) that produced only 0.1 watt. Unsuccessful with this, he asked for assistance from Paul-Günther Erbslöh and Hans-Karl Freiherr von Willisen, amateur radio operators who were developing a VHF system for communications. They enthusiastically agreed, and in January 1934, formed a company, Gesellschaft für Elektroakustische und Mechanische Apparate (GEMA), for the effort. From the start, the firm was always called simply GEMA. Work on a Funkmessgerät für Untersuchung (radio measuring device for research) began in earnest at GEMA. Hans Hollmann and Theodor Schultes, both affiliated with the prestigious Heinrich Hertz Institute in Berlin, were added as consultants. The first apparatus used a split-anode magnetron purchased from Philips in the Netherlands. This provided about 70 W at 50 cm (600 MHz), but suffered from frequency instability. Hollmann built a regenerative receiver and Schultes developed Yagi antennas for transmitting and receiving. In June 1934, large vessels passing through the Kiel Harbor were detected by Doppler-beat interference at a distance of about 2 km (1.2 mi). In October, strong reflections were observed from an aircraft that happened to fly through the beam; this opened consideration of targets other than ships. Kühnhold then shifted the GEMA work to a pulse-modulated system. 
A new 50 cm (600 MHz) Philips magnetron with better frequency stability was used. It was modulated with 2-μs pulses at a PRF of 2000 Hz. The transmitting antenna was an array of 10 pairs of dipoles with a reflecting mesh. The wide-band regenerative receiver used Acorn tubes from RCA, and the receiving antenna had three pairs of dipoles and incorporated lobe switching. A blocking device (a duplexer) shut the receiver input when the transmitter pulsed. A Braun tube (a CRT) was used for displaying the range. The equipment was first tested at an NVA site at the Lübecker Bay near Pelzerhaken. During May 1935, it detected returns from woods across the bay at a range of 15 km (9.3 mi). It had limited success, however, in detecting a research ship, Welle, only a short distance away. The receiver was then rebuilt, becoming a super-regenerative set with two intermediate-frequency stages. With this improved receiver, the system readily tracked vessels at up to 8 km (5.0 mi) range. In September 1935, a demonstration was given to the Commander-in-Chief of the Kriegsmarine. The system performance was excellent; the range was read off the Braun tube with a tolerance of 50 meters (less than 1 percent variance), and the lobe switching allowed a directional accuracy of 0.1 degree. Historically, this marked the first naval vessel equipped with radar. Although this apparatus was not put into production, GEMA was funded to develop similar systems operating around 50 cm (500 MHz). These became the Seetakt for the Kriegsmarine and the Freya for the Luftwaffe (German Air Force). Kühnhold remained with the NVA, but also consulted with GEMA. He is considered by many in Germany as the Father of Radar. During 1933–6, Hollmann wrote the first comprehensive treatise on microwaves, Physik und Technik der ultrakurzen Wellen (Physics and Technique of Ultrashort Waves), published by Springer in 1938. In 1933, when Kühnhold at the NVA was first experimenting with microwaves, he had sought information from Telefunken on microwave tubes. (Telefunken was the largest supplier of radio products in Germany.) There, Wilhelm Tolmé Runge had told him that no vacuum tubes were available for these frequencies. In fact, Runge was already experimenting with high-frequency transmitters and had Telefunken's tube department working on cm-wavelength devices. In the summer of 1935, Runge, now Director of Telefunken's Radio Research Laboratory, initiated an internally funded project in radio-based detection. Using Barkhausen-Kurz tubes, a 50 cm (600 MHz) receiver and 0.5-W transmitter were built. With the antennas placed flat on the ground some distance apart, Runge arranged for an aircraft to fly overhead and found that the receiver gave a strong Doppler-beat interference signal. Runge, now with Hans Hollmann as a consultant, continued developing a 1.8 m (170 MHz) system using pulse modulation. Wilhelm Stepp developed a transmit-receive device (a duplexer) for allowing a common antenna. Stepp also code-named the system Darmstadt after his home town, starting the practice in Telefunken of giving the systems names of cities. The system, with only a few watts of transmitter power, was first tested in February 1936, detecting an aircraft at about 5 km (3.1 mi) distance. This led the Luftwaffe to fund the development of a 50 cm (600 MHz) gun-laying system, the Würzburg. Since before the First World War, Standard Elektrik Lorenz had been the main supplier of communication equipment for the German military and was the main rival of Telefunken.
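Applying the same standard relations sketched earlier to the 2-μs, 2000 Hz figures quoted above for the GEMA set gives a feel for its working envelope; none of these derived numbers appear in the source, and the 50-metre read-off tolerance presumably describes how precisely a single echo could be read from the Braun tube rather than a two-target resolution.

```python
C = 299_792_458.0  # speed of light in m/s

# Figures quoted above for the GEMA pulse set: 2 us pulses, PRF of 2000 Hz.
prf_hz = 2000.0
pulse_width_s = 2e-6

unambiguous_range_km = C / (2.0 * prf_hz) / 1000.0   # ~75 km between pulses
two_target_resolution_m = C * pulse_width_s / 2.0    # ~300 m pulse length in range

print(unambiguous_range_km, two_target_resolution_m)
```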
In late 1935, when Lorenz found that Runge at Telefunken was doing research in radio-based detection equipment, they started a similar activity under Gottfried Müller. A pulse-modulated set called Einheit für Abfragung (DFA – Device for Detection) was built. It used a type DS-310 tube (similar to the Acorn) operating at 70 cm (430 MHz) with about 1 kW power, and it had identical transmitting and receiving antennas made with rows of half-wavelength dipoles backed by a reflecting screen. In early 1936, initial experiments gave reflections from large buildings at up to about 7 km (4.3 mi). The power was doubled by using two tubes, and in mid-1936, the equipment was set up on cliffs near Kiel, and good detections of ships at 7 km (4.3 mi) and aircraft at 4 km (2.5 mi) were attained. The success of this experimental set was reported to the Kriegsmarine, but they showed no interest; they were already fully engaged with GEMA for similar equipment. Also, because of extensive agreements between Lorenz and many foreign countries, the naval authorities had reservations concerning the company handling classified work. The DFA was then demonstrated to the Heer (German Army), and they contracted with Lorenz for developing Kurfürst (Elector), a system for supporting Flugzeugabwehrkanone (Flak, anti-aircraft guns). In the United States, both the Navy and Army needed means of remotely locating enemy ships and aircraft. In 1930, both services initiated the development of radio equipment that could meet this need. There was little coordination of these efforts; thus, they will be described separately. In the autumn of 1922, Albert H. Taylor and Leo C. Young at the U.S. Naval Aircraft Radio Laboratory were conducting communication experiments when they noticed that a wooden ship in the Potomac River was interfering with their signals. They prepared a memorandum suggesting that this might be used for ship detection in a harbor defense, but their suggestion was not taken up. In 1930, Lawrence A. Hyland, working with Taylor and Young, now at the U.S. Naval Research Laboratory (NRL) in Washington, D.C., used a similar arrangement of radio equipment to detect a passing aircraft. This led to a proposal and patent for using this technique for detecting ships and aircraft. A simple wave-interference apparatus can detect the presence of an object, but it cannot determine its location or velocity. That had to await the invention of pulsed radar, and later, additional encoding techniques to extract this information from a CW signal. When Taylor's group at the NRL were unsuccessful in getting interference radio accepted as a detection means, Young suggested trying pulsing techniques. This would also allow the direct determination of range to the target. In 1924, Hyland and Young had built such a transmitter for Gregory Breit and Merle A. Tuve at the Carnegie Institution of Washington for successfully measuring the height of the ionosphere. Robert Morris Page was assigned by Taylor to implement Young's suggestion. Page designed a transmitter operating at 60 MHz, pulsed with a duration of 10 μs and 90 μs between pulses. In December 1934, the apparatus was used to detect a plane at a distance of one mile (1.6 km) flying up and down the Potomac. Although the detection range was small and the indications on the oscilloscope monitor were almost indistinct, it demonstrated the basic concept of a pulsed radar system. Based on this, Page, Taylor, and Young are usually credited with building and demonstrating the world's first true radar.
An important subsequent development by Page was the duplexer, a device that allowed the transmitter and receiver to use the same antenna without overwhelming or destroying the sensitive receiver circuitry. This also solved the problem associated with synchronization of separate transmitter and receiver antennas, which is critical to accurate position determination of long-range targets. The experiments with pulsed radar were continued, primarily in improving the receiver for handling the short pulses. In June 1936, the NRL's first prototype radar system, now operating at 28.6 MHz, was demonstrated to government officials, successfully tracking an aircraft at distances up to 25 miles (40 km). Their radar was based on low frequency signals, at least by today's standards, and thus required large antennas, making it impractical for ship or aircraft mounting. Antenna size is inversely proportional to the operating frequency; therefore, the operating frequency of the system was increased to 200 MHz, allowing much smaller antennas. The frequency of 200 MHz was the highest possible with existing transmitter tubes and other components. The new system was successfully tested at the NRL in April 1937. That same month, the first sea-borne testing was conducted. The equipment was temporarily installed on the USS Leary, with a Yagi antenna mounted on a gun barrel for sweeping the field of view. Based on the success of the sea trials, the NRL further improved the system. Page developed the ring oscillator, allowing multiple output tubes and increasing the pulse power to 15 kW in 5-µs pulses. A 20-by-23 ft (6 x 7 m), stacked-dipole “bedspring” antenna was used. In laboratory tests during 1938, the system, now designated XAF, detected planes at ranges up to 100 miles (160 km). It was installed on the battleship USS New York for sea trials starting in January 1939, and became the first operational radio detection and ranging set in the U.S. fleet. In May 1939, a contract was awarded to RCA for production. Designated CXAM, deliveries started in May 1940. The acronym RADAR was coined from "Radio Detection And Ranging". One of the first CXAM systems was placed aboard the USS California, a battleship that was sunk in the Japanese attack on Pearl Harbor on December 7, 1941.

United States Army

As the Great Depression started, economic conditions led the U.S. Army Signal Corps to consolidate its widespread laboratory operations at Fort Monmouth, New Jersey. On June 30, 1930, these were designated the Signal Corps Laboratories (SCL) and Lt. Colonel (Dr.) William R. Blair was appointed the SCL Director. Among other activities, the SCL was made responsible for research in the detection of aircraft by acoustical and infrared radiation means. Blair had performed his doctoral research in the interaction of electromagnetic waves with solid materials, and naturally gave attention to this type of detection. Initially, attempts were made to detect infrared radiation, either from the heat of aircraft engines or as reflected from large searchlights with infrared filters, as well as radio signals generated by the engine ignition. Some success was achieved in the infrared detection, but little was accomplished using radio. In 1932, progress at the Naval Research Laboratory (NRL) on radio interference for aircraft detection was passed on to the Army.
While it does not appear that any of this information was used by Blair, the SCL did undertake a systematic survey of what was then known throughout the world about the methods of generating, modulating, and detecting radio signals in the microwave region. The SCL's first definitive efforts in radio-based target detection started in 1934 when the Chief of the Army Signal Corps, after seeing a microwave demonstration by RCA, suggested that radio-echo techniques be investigated. The SCL called this technique radio position-finding (RPF). Based on the previous investigations, the SCL first tried microwaves. During 1934 and 1935, tests of microwave RPF equipment resulted in Doppler-shifted signals being obtained, initially at only a few hundred feet distance and later at greater than a mile. These tests involved a bi-static arrangement, with the transmitter at one end of the signal path and the receiver at the other, and the reflecting target passing through or near the path. Blair was evidently not aware of the success of a pulsed system at the NRL in December 1934. In an internal 1935 note, Blair had commented: Consideration is now being given to the scheme of projecting an interrupted sequence of trains of oscillations against the target and attempting to detect the echoes during the interstices between the projections. In 1936, W. Delmar Hershberger, SCL's Chief Engineer at that time, started a modest project in pulsed microwave transmission. Lacking success with microwaves, Hershberger visited the NRL (where he had earlier worked) and saw a demonstration of their pulsed set. Back at the SCL, he and Robert H. Noyes built an experimental apparatus using a 75 watt, 110 MHz (2.73 m) transmitter with pulse modulation and a receiver patterned on the one at the NRL. A request for project funding was turned down by the War Department, but $75,000 for support was diverted from a previous appropriation for a communication project. In October 1936, Paul E. Watson became the SCL Chief Engineer and led the project. A field setup near the coast was made with the transmitter and receiver separated by a mile. On December 14, 1936, the experimental set detected aircraft flying in and out of New York City at ranges of up to 7 mi (11 km). Work then began on a prototype system. Ralph I. Cole headed receiver work and William S. Marks led transmitter improvements. Separate receivers and antennas were used for azimuth and elevation detection. Both the receiving and transmitting antennas used large arrays of dipole wires on wooden frames. The system output was intended to aim a searchlight. The first demonstration of the full set was made on the night of May 26, 1937. A bomber was detected and then illuminated by the searchlight. The observers included the Secretary of War, Henry A. Woodring; he was so impressed that the next day orders were given for the full development of the system. Congress gave an appropriation of $250,000. The frequency was increased to 200 MHz (1.5 m). The transmitter used 16 tubes in a ring oscillator circuit (developed at the NRL), producing about 75 kW peak power. Major James C. Moore was assigned to head the complex electrical and mechanical design of lobe-switching antennas. Engineers from Western Electric and Westinghouse were brought in to assist in the overall development. Designated SCR-268, a prototype was successfully demonstrated in late 1938 at Fort Monroe, Virginia. The production of SCR-268 sets was started by Western Electric in 1939, and it entered service in early 1941.
Even before the SCR-268 entered service, it had been greatly improved. In a project led by Major (Dr.) Harold A. Zahl, two new configurations evolved – the SCR-270 (mobile) and the SCR-271 (fixed-site). Operation at 106 MHz (2.83 m) was selected, and a single water-cooled tube provided 8 kW (100 kW pulsed) output power. Westinghouse received a production contract, and started deliveries near the end of 1940. The Army deployed five of the first SCR-270 sets around the island of Oahu in Hawaii. At 7:02 on the morning of December 7, 1941, one of these radars detected a flight of aircraft at a range of 136 miles (219 km) due north. The observation was passed on to an aircraft warning center, where it was misidentified as a flight of U.S. bombers known to be approaching from the mainland. The alarm went unheeded, and at 7:48, the Japanese aircraft first struck at Pearl Harbor. In 1895, Alexander Stepanovich Popov, a physics instructor at the Imperial Russian Navy school in Kronstadt, developed an apparatus using a coherer tube for detecting distant lightning strikes. The next year, he added a spark-gap transmitter and demonstrated the first radio communication set in Russia. During 1897, while testing this in communicating between two ships in the Baltic Sea, he took note of an interference beat caused by the passage of a third vessel. In his report, Popov wrote that this phenomenon might be used for detecting objects, but he did nothing more with this observation. In the years following the 1917 Russian Revolution and the establishment of the Union of Soviet Socialist Republics (USSR or Soviet Union) in 1924, Germany's Luftwaffe had aircraft capable of penetrating deep into Soviet territory. Thus, the detection of aircraft at night or above clouds was of great interest to the Soviet Air Defense Forces (PVO). The PVO depended on optical devices for locating targets, and had physicist Pavel K. Oshchepkov conducting research in the possible improvement of these devices. In June 1933, Oshchepkov changed his research from optics to radio techniques and started the development of a razvedyvatel'naya elektromagnitnaya stantsiya (reconnaissance electromagnetic station). In a short time, Oshchepkov was made responsible for a technical expertise sector of the PVO devoted to radiolokatory (radio-location) techniques as well as heading a Special Design Bureau (SKB, spetsialnoe konstruktorskoe byuro) in Leningrad. The Glavnoe Artilleriyskoe Upravlenie (GAU, Main Artillery Administration) was considered the “brains” of the Red Army. It not only had competent engineers and physicists on its central staff, but also had a number of scientific research institutes. Thus, the GAU was also assigned the aircraft detection problem, and Lt. Gen. M. M. Lobanov was placed in charge. After examining existing optical and acoustical equipment, Lobanov also turned to radio-location techniques. For this he approached the Tsentral’naya Radiolaboratoriya (TsRL, Central Radio Laboratory) in Leningrad. Here, Yu. K. Korovin was conducting research on VHF communications, and had built a 50 cm (600 MHz), 0.2 W transmitter using a Barkhausen-Kurz tube. For testing the concept, Korovin arranged the transmitting and receiving antennas along the flight path of an aircraft. On January 3, 1934, a Doppler signal was received by reflections from the aircraft at some 600 m range and 100–150 m altitude. For further research in detection methods, a major conference on this subject was arranged for the PVO by the Russian Academy of Sciences (RAN).
The conference was held in Leningrad in mid-January 1934, and chaired by Abram Fedorovich Ioffe, Director of the Leningrad Physical-Technical Institute (LPTI). Ioffe was generally considered the top Russian physicist of his time. All types of detection techniques were discussed, but radio-location received the greatest attention. To distribute the conference findings to a wider audience, the proceedings were published the following month in a journal. This included all of the then-existing information on radio-location in the USSR, available (in Russian language) to researchers in this field throughout the world. Recognizing the potential value of radio-location to the military, the GAU made a separate agreement with the Leningrad Electro-Physics Institute (LEPI), for a radio-location system. This technical effort was led by B. K. Shembel. The LEPI had built a transmitter and receiver to study the radio-reflection characteristics of various materials and targets. Shembel readily made this into an experimental bi-static radio-location system called Bistro (Rapid). The Bistro transmitter, operating at 4.7 m (64 MHz), produced near 200 W and was frequency-modulated by a 1 kHz tone. A fixed transmitting antenna gave a broad coverage of what was called a radioekran (radio screen). A regenerative receiver, located some distance from the transmitter, had a dipole antenna mounted on a hand-driven reciprocating mechanism. An aircraft passing into the screened zone would reflect the radiation, and the receiver would detect the Doppler-interference beat between the transmitted and reflected signals. Bistro was first tested during the summer of 1934. With the receiver up to 11 km away from the transmitter, the set could only detect an aircraft entering a screen at about 3 km (1.9 mi) range and under 1,000 m. With improvements, it was believed to have a potential range of 75 km, and five sets were ordered in October for field trials. Bistro is often cited as the USSR's first radar system; however, it was incapable of directly measuring range and thus could not be so classified. LEPI and TsRL were both made a part of Nauchno-issledovatelsky institut-9 (NII-9, Scientific Research Institute #9), a new GAU organization opened in Leningrad in 1935. Mikhail A. Bonch-Bruyevich, a renowned radio physicist previously with TsRL and the University of Leningrad, was named the NII-9 Scientific Director. Research on magnetrons began at Kharkov University in Ukraine during the mid-1920s. Before the end of the decade this had resulted in publications with worldwide distribution, such as the German journal Annalen der Physik (Annals of Physics). Based on this work, Ioffe recommended that a portion of the LEPI be transferred to the city of Kharkiv, resulting in the Ukrainian Institute of Physics and Technology (LIPT) being formed in 1930. Within the LIPT, the Laboratory of Electromagnetic Oscillations (LEMO), headed by Abram A. Slutskin, continued with magnetron development. Led by Aleksandr S. Usikov, a number of advanced segmented-anode magnetrons evolved. (It is noted that these and other early magnetrons developed in the USSR suffered from frequency instability, a problem in their use in Soviet radar systems.) In 1936, one of Usikov's magnetrons producing about 7 W at 18 cm (1.7 GHz) was used by Shembel at the NII-9 as a transmitter in a radioiskatel (radio-seeker) called Burya (Storm). 
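The Doppler-interference beat that Bistro listened for, and that the Burya set just mentioned also relied on, comes from the changing path length between transmitter, target, and receiver. The sketch below is only illustrative: it uses the simpler monostatic form of the relation, and the aircraft speed is an assumed value, not a figure from the source.

```python
# Doppler-beat frequency heard when a reflected signal mixes with the
# direct one. In the simple monostatic case the beat is 2*v/lambda;
# in a bistatic "radio screen" it is set by the rate of change of the
# whole transmitter-target-receiver path. Values are illustrative only.
wavelength_m = 4.7        # Bistro's quoted operating wavelength (64 MHz)
radial_speed_m_s = 90.0   # assumed closing speed of the aircraft

beat_hz = 2.0 * radial_speed_m_s / wavelength_m
print(beat_hz)  # ~38 Hz beat for these assumed values
```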
Operating similarly to Bistro, the range of detection was about 10 km, and it provided azimuth and elevation coordinates estimated to within 4 degrees. No attempts were made to make this into a pulsed system; thus, it could not provide range and was not qualified to be classified as a radar. It was, however, the first microwave radio-detection system. While work by Shembel and Bonch-Bruyevich on continuous-wave systems was taking place at NII-9, Oshchepkov at the SKB and V. V. Tsimbalin of Ioffe's LPTI were pursuing a pulsed system. In 1936, they built a radio-location set operating at 4 m (75 MHz) with a peak power of about 500 W and a 10-μs pulse duration. Before the end of the year, tests using separated transmitting and receiving sites resulted in an aircraft being detected at 7 km. In April 1937, with the peak pulse power increased to 1 kW and the antenna separation also increased, tests showed a detection range of nearly 17 km at a height of 1.5 km. Although a pulsed system, it was not capable of directly providing range – the technique of using pulses for determining range had not yet been developed.

Pre-war radio location systems

In June 1937, all of the work in Leningrad on radio-location suddenly stopped. The infamous Great Purge of dictator Joseph Stalin swept over the military high commands and their supporting scientific community. The PVO chief was executed. Oshchepkov, charged with “high crime”, was sentenced to 10 years at a Gulag penal labor camp. NII-9 as an organization was saved, but Shembel was dismissed and Bonch-Bruyevich was named the new director. The Nauchno-issledovatel'skii ispytatel'nyi institut svyazi RKKA (NIIIS-KA, Scientific Research Institute of Signals of the Red Army) had initially opposed research in radio-location, favoring instead acoustical techniques. However, this portion of the Red Army gained power as a result of the Great Purge, and did an about-face, pressing hard for speedy development of radio-location systems. They took over Oshchepkov's laboratory and were made responsible for all existing and future agreements for research and factory production. Writing later about the Purge and its subsequent effects, General Lobanov commented that it led to the development being placed under a single organization, and the rapid reorganization of the work. At Oshchepkov's former laboratory, work with the 4 m (75 MHz) pulsed-transmission system was continued by A. I. Shestako. Through pulsing, the transmitter produced a peak power of 1 kW, the highest level thus far generated. In July 1938, a fixed-position, bi-static experimental system detected an aircraft at about 30 km range at heights of 500 m, and at 95 km range for high-flying targets at 7.5 km altitude. The system was still incapable of directly determining the range. The project was then taken up by Ioffe's LPTI, resulting in the development of a mobile system designated Redut (Redoubt). An arrangement of new transmitter tubes was used, giving near 50 kW peak power with a 10-μs pulse duration. Yagi antennas were adopted for both transmitting and receiving. The Redut was first field tested in October 1939, at a site near Sevastopol, a port in Ukraine on the coast of the Black Sea. This testing was in part to show the NKKF (Soviet Navy) the value of early-warning radio-location for protecting strategic ports. With the equipment on a cliff about 160 meters above sea level, a flying boat was detected at ranges up to 150 km.
The Yagi antennas were spaced about 1,000 meters; thus, close coordination was required to aim them in synchronization. An improved version of the Redut, the Redut-K, was developed by Aksel Berg in 1940 and placed aboard the light cruiser Molotov in April 1941. Molotov became the first Soviet warship equipped with radar. At the NII-9 under Bonch-Bruyevich, scientists developed two types of very advanced microwave generators. In 1938, a linear-beam, velocity-modulated vacuum tube (a klystron) was developed by Nikolay Devyatkov, based on designs from Kharkiv. This device produced about 25 W at 15–18 cm (2.0–1.7 GHz) and was later used in experimental systems. Devyatkov followed this with a simpler, single-resonator device (a reflex klystron). At this same time, D. E. Malyarov and N. F. Alekseyev were building a series of magnetrons, also based on designs from Kharkov; the best of these produced 300 W at 9 cm (3 GHz). Also at NII-9, D. S. Stogov was placed in charge of the improvements to the Bistro system. Redesignated as Reven (Rhubarb), it was tested in August 1938, but was only marginally better than the predecessor. With additional minor operational improvements, it was made into a mobile system called Radio Ulavlivatel Samoletov (RUS, Radio Catcher of Aircraft), soon designated as RUS-1. This continuous-wave, bi-static system had a truck-mounted transmitter operating at 4.7 m (64 MHz) and two truck-mounted receivers. Although the RUS-1 transmitter was in a cabin on the rear of a truck, the antenna had to be strung between external poles anchored to the ground. A second truck carrying the electrical generator and other equipment was backed against the transmitter truck. Two receivers were used, each in a truck-mounted cabin with a dipole antenna on a rotatable pole extended overhead. In use, the receiver trucks were placed about 40 km apart; thus, with two positions, it would be possible to make a rough estimate of the range by triangulation on a map. The RUS-1 system was tested and put into production in 1939, then entered service in 1940, becoming the first deployed radio-location system in the Red Army. About 45 RUS-1 systems were built at the Svetlana Factory in Leningrad before the end of 1941, and deployed along the western USSR borders and in the Far East. Without direct ranging capability, however, the military found the RUS-1 to be of little value. Even before the demise of efforts in Leningrad, the NIIIS-KA had contracted with the UIPT in Kharkov to investigate a pulsed radio-location system for anti-aircraft applications. This led the LEMO, in March 1937, to start an internally funded project with the code name Zenit (a popular football team at the time). The transmitter development was led by Usikov, supplier of the magnetron used earlier in the Burya. For the Zenit, Usikov used a 60 cm (500 MHz) magnetron pulsed at 10–20 μs duration and providing 3 kW pulsed power, later increased to near 10 kW. Semion Braude led the development of a superheterodyne receiver using a tunable magnetron as the local oscillator. The system had separate transmitting and receiving antennas set about 65 m apart, built with dipoles backed by 3-meter parabolic reflectors. Zenit was first tested in October 1938. In this, a medium-sized bomber was detected at a range of 3 km. The testing was observed by the NIIIS-KA and found to be sufficient for starting a contracted effort. 
An agreement was made in May 1939, specifying the required performance and calling for the system to be ready for production by 1941. The transmitter was increased in power, the antennas had selsyns added to allow them to track, and the receiver sensitivity was improved by using an RCA 955 acorn triode as the local oscillator. A demonstration of the improved Zenit was given in September 1940. In this, it was shown that the range, altitude, and azimuth of an aircraft flying at heights between 4,000 and 7,000 meters could be determined at up to 25 km distance. The time required for these measurements, however, was about 38 seconds, far too long for use by anti-aircraft batteries. Also, with the antennas aimed at a low angle, there was a dead zone of some distance caused by interference from ground-level reflections. While this performance was not satisfactory for immediate gun-laying applications, it was the first full three-coordinate radio-location system in the Soviet Union and showed the way for future systems. Work at the LEMO continued on Zenit, particularly in converting it into a single-antenna system designated Rubin. This effort, however, was disrupted by the invasion of the USSR by Germany in June 1941. In a short while, the development activities at Kharkov were ordered to be evacuated to the Far East. The research efforts in Leningrad were similarly dispersed. After eight years of effort by highly qualified physicists and engineers, the USSR entered World War II without a fully developed and fielded radar system. As a seafaring nation, Japan had an early interest in wireless (radio) communications. The first known use of wireless telegraphy in warfare at sea was by the Imperial Japanese Navy, in defeating the Russian Imperial Fleet in 1904 at the Battle of Port Arthur. There was an early interest in equipment for radio direction-finding, for use in both navigation and military surveillance. The Imperial Navy developed an excellent receiver for this purpose in 1921, and soon most of the Japanese warships had this equipment. In the two decades between the two World Wars, radio technology in Japan made advancements on a par with that in the western nations. There were often impediments, however, in transferring these advancements into the military. For a long time, the Japanese had believed that they had the best fighting capability of any military force in the world. The military leaders, who were then also in control of the government, sincerely felt that the weapons, aircraft, and ships that they had built were fully sufficient and that, with these as they were, the Japanese Army and Navy were invincible. In 1936, Japan signed the Anti-Comintern Pact with Nazi Germany; Fascist Italy joined the following year, and the three powers went on to conclude the Tripartite Pact in 1940. Radio engineering was strong in Japan's higher education institutions, especially the Imperial (government-financed) universities. This included undergraduate and graduate study, as well as academic research in this field. Special relationships were established with foreign universities and institutes, particularly in Germany, with Japanese teachers and researchers often going overseas for advanced study. The academic research tended toward the improvement of basic technologies, rather than their specific applications. There was considerable research in high-frequency and high-power oscillators, such as the magnetron, but the application of these devices was generally left to industrial and military researchers. One of Japan's best-known radio researchers in the 1920s–1930s era was Professor Hidetsugu Yagi.
After graduate study in Germany, England, and America, Yagi joined Tohoku University, where his research centered on antennas and oscillators for high-frequency communications. A summary of the radio research work at Tohoku University was contained in a 1928 seminal paper by Yagi. Working jointly with Shintaro Uda, one of his first doctoral students, Yagi developed a radically new antenna. It had a number of parasitic elements (directors and reflectors) and would come to be known as the Yagi-Uda or Yagi antenna. A U.S. patent, issued in May 1932, was assigned to RCA. To this day, this is the most widely used directional antenna worldwide. The magnetron was also of interest to Yagi. This HF (~10-MHz) device had been invented in 1921 by Albert W. Hull at General Electric, and Yagi was convinced that it could function in the VHF or even the UHF region. In 1927, Kinjiro Okabe, another of Yagi's early doctoral students, developed a split-anode device that ultimately generated oscillations at wavelengths down to about 12 cm (2.5 GHz). Researchers at other Japanese universities and institutions also started projects in magnetron development, leading to improvements in the split-anode device. These included Kiyoshi Morita at the Tokyo Institute of Technology, and Tsuneo Ito at Tohoku University. Shigeru Nakajima at Japan Radio Company (JRC) saw a commercial potential of these devices and began the further development and subsequent very profitable production of magnetrons for the medical dielectric heating (diathermy) market. The only military interest in magnetrons was shown by Yoji Ito at the Naval Technical Research Institute (NTRI). The NTRI was formed in 1922, and became fully operational in 1930. Located at Meguro, Tokyo, near the Tokyo Institute of Technology, it employed first-rate scientists, engineers, and technicians, engaged in activities ranging from designing giant submarines to building new radio tubes. Included were all of the precursors of radar, but this did not mean that the heads of the Imperial Navy accepted these accomplishments. In 1936, Tsuneo Ito (no relationship to Yoji Ito) developed an 8-split-anode magnetron that produced about 10 W at 10 cm (3 GHz). Based on its appearance, it was named Tachibana (or Mandarin, an orange citrus fruit). Tsuneo Ito also joined the NTRI and continued his research on magnetrons in association with Yoji Ito. In 1937, they developed the technique of coupling adjacent segments (called push-pull), resulting in frequency stability, an extremely important magnetron breakthrough. By early 1939, NTRI/JRC had jointly developed a 10-cm (3-GHz), stable-frequency Mandarin-type magnetron (No. M3) that, with water cooling, could produce 500 W of power. In the same time period, magnetrons were built with 10 and 12 cavities operating at wavelengths as short as 0.7 cm (40 GHz). The configuration of the M3 magnetron was essentially the same as that used later in the magnetron developed by Boot and Randall at Birmingham University in early 1940, including the improvement of strapped cavities. Unlike the high-power magnetron in Britain, however, the initial device from the NTRI generated only a few hundred watts. In general, there was no lack of scientific and engineering capabilities in Japan; their warships and aircraft clearly showed high levels of technical competency. They were ahead of Britain in the development of magnetrons, and their Yagi antenna was the world standard for VHF systems. 
It was simply that the top military leaders failed to recognize how the application of radio in detection and ranging – what was often called the Radio Range Finder (RRF) – could be of value, particularly in any defensive role; offense, not defense, totally dominated their thinking. In 1938, engineers from the Research Office of Nippon Electric Company (NEC) were making coverage tests on high-frequency transmitters when rapid fading of the signal was observed. This occurred whenever an aircraft passed over the line between the transmitter and receiving meter. Masatsugu Kobayashi, the Manager of NEC's Tube Department, recognized that this was due to the beat-frequency interference of the direct signal and the Doppler-shifted signal reflected from the aircraft. Kobayashi suggested to the Army Science Research Institute that this phenomenon might be used as an aircraft warning method. Although the Army had rejected earlier proposals for using radio-detection techniques, this one had appeal because it was based on an easily understandable method and would require little developmental cost and risk to prove its military value. NEC assigned Kinji Satake of their Research Institute to develop a system called the Bi-static Doppler Interference Detector (BDID). For testing the prototype system, it was set up on an area recently occupied by Japan along the coast of China. The system operated between 4.0 and 7.5 MHz (75–40 m) and involved a number of widely spaced stations; this formed a radio screen that could detect the presence (but nothing more) of an aircraft at distances up to 500 km (310 mi). The BDID was the Imperial Army's first deployed radio-based detection system, placed into operation in early 1941. A similar system was developed by Satake for the Japanese homeland. Information centers received oral warnings from the operators at BDID stations, usually spaced between 65 and 240 km (40 and 150 mi) apart. To reduce homing vulnerability – a great fear of the military – the transmitters operated with only a few watts of power. Although originally intended to be temporary until better systems were available, they remained in operation throughout the war. It was not until after the start of war that the Imperial Army had equipment that could be called radar. In the mid-1930s, some of the technical specialists in the Imperial Navy became interested in the possibility of using radio to detect aircraft. For consultation, they turned to Professor Yagi, who was the Director of the Radio Research Laboratory at Osaka Imperial University. Yagi suggested that this might be done by examining the Doppler frequency-shift in a reflected signal. Funding was provided to the Osaka Laboratory for experimental investigation of this technique. Kinjiro Okabe, the inventor of the split-anode magnetron, who had followed Yagi to Osaka, led the effort. Theoretical analyses indicated that the reflections would be greater if the wavelength was approximately the same as the size of aircraft structures. Thus, a VHF transmitter and receiver with Yagi antennas separated some distance were used for the experiment. In 1936, Okabe successfully detected a passing aircraft by the Doppler-interference method; this was the first recorded demonstration in Japan of aircraft detection by radio. With this success, Okabe's research interest switched from magnetrons to VHF equipment for target detection. This, however, did not lead to any significant funding. 
The top levels of the Imperial Navy believed that any advantage of using radio for this purpose was greatly outweighed by enemy intercept and disclosure of the sender's presence. Historically, warships in formation used lights and horns to avoid collision at night or when in fog. Newer techniques of VHF radio communications and direction-finding might also be used, but all of these methods were highly vulnerable to enemy interception. At the NTRI, Yoji Ito proposed that the UHF signal from a magnetron might be used to generate a very narrow beam that would have a greatly reduced chance of enemy detection. Development of a microwave system for collision avoidance started in 1939, when funding was provided by the Imperial Navy to JRC for preliminary experiments. In a cooperative effort involving Yoji Ito of the NTRI and Shigeru Nakajima of JRC, an apparatus using a 3-cm (10-GHz) magnetron with frequency modulation was designed and built. The equipment was used in an attempt to detect reflections from tall structures a few kilometers away. This experiment gave poor results, attributed to the very low power from the magnetron. The initial magnetron was replaced by one operating at 16 cm (1.9 GHz) and with considerably higher power. The results were then much better, and in October 1940, the equipment obtained clear echoes from a ship in Tokyo Bay at a distance of about 10 km (6.2 mi). There was still no commitment by top Japanese naval officials for using this technology aboard warships. Nothing more was done at this time, but late in 1941, the system was adopted for limited use. In late 1940, Japan arranged for two technical missions to visit Germany and exchange information about their developments in military technology. Commander Yoji Ito represented the Navy's interest in radio applications, and Lieutenant Colonel Kinji Satake did the same for the Army. During a visit of several months, they exchanged significant general information, as well as limited secret materials in some technologies, but little directly concerning radio-detection techniques. Neither side even mentioned magnetrons, but the Germans did apparently disclose their use of pulsed techniques. After receiving the reports from the technical exchange in Germany, as well as intelligence reports concerning the success of Britain with RDF-directed gunfire, the Naval General Staff reversed itself and tentatively accepted pulse-transmission technology. On August 2, 1941, even before Yoji Ito returned to Japan, funds were allocated for the initial development of pulse-modulated radars. Commander Chuji Hashimoto of the NTRI was responsible for initiating this activity. A prototype set operating at 4.2 m (71 MHz) and producing about 5 kW was completed on a crash basis. With the NTRI in the lead, the firm NEC and the Research Laboratory of Japan Broadcasting Corporation (NHK) made major contributions to the effort. Kenjiro Takayanagi, Chief Engineer of NHK's experimental television station and called “the father of Japanese television”, was especially helpful in rapidly developing the pulse-forming and timing circuits, as well as the receiver display. In early September 1941, the prototype set was first tested; it detected a single bomber at 97 km (60 mi) and a flight of aircraft at 145 km (90 mi). The system, Japan's first full Radio Range Finder (RRF – radar), was designated Mark 1 Model 1. 
Contracts were given to three firms for serial production; NEC built the transmitters and pulse modulators, Japan Victor the receivers and associated displays, and Fuji Electrical the antennas and their servo drives. The system operated at 3.0 m (100 MHz) with a peak-power of 40 kW. Dipole arrays with mat-type reflectors were used in separate antennas for transmitting and receiving. In November 1941, the first manufactured RRF was placed into service as a land-based early-warning system at Katsuura, Chiba, a town on the Pacific coast about 100 km (62 mi) from Tokyo. A large system, it weighed close to 8,700 kg (19,000 lb). The detection range was about 130 km (81 mi) for single aircraft and 250 km (160 mi) for groups. The Philips Company in Eindhoven, Netherlands, operated the Natuurkundig Laboratorium (NatLab) for fundamental research related to its products. NatLab researcher Klaas Posthumus developed a magnetron split into four elements. In developing a communication system using this magnetron, C.H.J.A. Staal was testing the transmission by using parabolic transmitting and receiving antennas set side-by-side, both aimed at a large plate some distance away. To overcome the frequency instability of the magnetron, pulse modulation was used. It was found that the plate reflected a strong signal. Recognizing the potential importance of this as a detection device, NatLab arranged a demonstration for the Koninklijke Marine (Royal Netherlands Navy). This was conducted in 1937 across the entrance to the main naval port at Marsdiep. Reflections from sea waves obscured the return from the target ship, but the Navy was sufficiently impressed to initiate sponsorship of the research. In 1939, an improved set was demonstrated at Wijk aan Zee, detecting a vessel at a distance of 3.2 km (2.0 mi). A prototype system was built by Philips, and plans were started by the firm Nederlandse Seintoestellen Fabriek (a Philips subsidiary) for building a chain of warning stations to protect the primary ports. Some field testing of the prototype was conducted, but the project was discontinued when Germany invaded the Netherlands on May 10, 1940. Within the NatLab, however, the work was continued in great secrecy until 1942. During the early 1930s, there were widespread rumours of a “death ray” being developed. The Dutch Parliament set up a Committee for the Applications of Physics in Weaponry under G.J. Elias to examine this potential, but the Committee quickly discounted death rays. The Committee did, however, establish the Laboratorium voor Fysieke Ontwikkeling (LFO, Laboratory for Physical Development), dedicated to supporting the Netherlands Armed Forces. Operating in great secrecy, the LFO opened a facility called the Meetgebouw (Measurements Building) located on the Plain of Waalsdorp. In 1934, J.L.W.C. von Weiler joined the LFO and, with S.G. Gratama, began research on a 1.25-m (240-MHz) communication system to be used in artillery spotting. In 1937, while tests were being conducted on this system, a passing flock of birds disturbed the signal. Realizing that this might be a potential method for detecting aircraft, the Minister of War ordered continuation of the experiments. Weiler and Gratama set about developing a system for directing searchlights and aiming anti-aircraft guns. The experimental “electrical listening device” operated at 70 cm (430 MHz) and used pulsed transmission at a PRF of 10 kHz. A transmit-receive blocking circuit was developed to allow a common antenna. 
The received signal was displayed on a CR tube with a circular time base. This set was demonstrated to the Army in April 1938 and detected an aircraft at a range of 18 km (11 mi). The set was rejected, however, because it could not withstand the harsh environment of Army combat conditions. The Navy was more receptive. Funding was provided for final development, and Max Staal was added to the team. To maintain secrecy, they divided the development into parts. The transmitter was built at the Delft Technical College and the receiver at the University of Leiden. Ten sets would be assembled under the personal supervision of J.J.A. Schagen van Leeuwen, head of the firm Hazemeijer Fabriek van Signaalapparaten. The prototype had a peak-power of 1 kW, and used a pulse length of 2 to 3 μs with a 10- to 20-kHz PRF. The receiver was a super-heterodyne type using Acorn tubes and a 6 MHz IF stage. The antenna consisted of 4 rows of 16 half-wave dipoles backed by a 3- by 3-meter mesh screen. The operator used a bicycle-type drive to rotate the antenna, and the elevation could be changed using a hand crank. Several sets were completed, and one was put into operation on the Malieveld in The Hague just before the Netherlands fell to Germany in May 1940. The set worked well, spotting enemy aircraft during the first days of fighting. To prevent capture, operating units and plans for the system were destroyed. Von Weiler and Max Staal fled to England aboard one of the last ships able to leave, carrying two disassembled sets with them. Later, Gratama and van Leeuwen also escaped to England. In 1927, French physicists Camille Gutton and Emile Pierret experimented with magnetrons and other devices generating wavelengths down to 16 cm. Camille's son, Henri Gutton, was with the Compagnie générale de la télégraphie sans fil (CSF), where he and Robert Warneck improved his father's magnetrons. In 1934, following systematic studies on the magnetron, the research branch of the CSF, headed by Maurice Ponte, submitted a patent application for a device designed to detect obstacles using continuous radiation of ultra-short wavelengths produced by a magnetron. These were still CW systems and depended on Doppler interference for detection. However, as in most modern radars, the antennas were collocated. The device measured distance and azimuth, though not directly on a screen as in the later "radar" displays (1939). Still, this was the first patent of an operational radio-detection apparatus using centimetric wavelengths. The system was tested in late 1934 aboard the cargo ship Oregon, with two transmitters working at 80 cm and 16 cm wavelengths. Coastlines and boats were detected from a range of 10–12 nautical miles. The shortest wavelength was chosen for the final design, which equipped the liner SS Normandie as early as mid-1935 for operational use. In late 1937, Maurice Elie at SFR developed a means of pulse-modulating transmitter tubes. This led to a new 16-cm system with a peak power near 500 W and a pulse width of 6 μs. French and U.S. patents were filed in December 1939. The system was planned to be sea-tested aboard the Normandie, but this was cancelled at the outbreak of war. At the same time, Pierre David at the Laboratoire National de Radioélectricité (National Laboratory of Radioelectricity, LNR) experimented with reflected radio signals at about a meter wavelength. Starting in 1931, he observed that aircraft caused interference to the signals. 
The LNR then initiated research on a detection technique called barrage électromagnétique (electromagnetic curtain). While this could indicate the general location of penetration, precise determination of direction and speed was not possible. In 1936, the Défense Aérienne du Territoire (Defence of Air Territory) ran tests on David's electromagnetic curtain. In the tests, the system detected most of the entering aircraft, but too many were missed. As the war grew closer, the need for aircraft detection was critical. David realized the advantages of a pulsed system, and in October 1938 he designed a 50 MHz, pulse-modulated system with a peak-pulse power of 12 kW. This was built by the firm SADIR. France declared war on Germany on September 1, 1939, and there was a great need for an early-warning detection system. The SADIR system was taken to near Toulon, and detected and measured the range of invading aircraft as far as 55 km (34 mi). The SFR pulsed system was set up near Paris where it detected aircraft at ranges up to 130 km (81 mi). However, the German advance was overwhelming and emergency measures had to be taken; it was too late for France to develop radars alone and it was decided that her breakthroughs would be shared with her allies. In mid-1940, Maurice Ponte, from the laboratories of CSF in Paris, presented a cavity magnetron designed by Henri Gutton at SFR (see above) to the GEC laboratories at Wembley, Britain. This magnetron was designed for pulsed operation at a wavelength of 16 cm. Unlike other magnetron designs to that day, such as the Boot and Randall magnetron (see British contributions above), this tube used an oxide-coated cathode with a peak power output of 1 kW, demonstrating that oxide cathodes were the solution for producing high-power pulses at short wavelengths, a problem which had eluded British and American researchers for years. The significance of this event was underlined by Eric Megaw, in a 1946 review of early radar developments: "This was the starting point of the use of the oxide cathode in practically all our subsequent pulsed transmitting valves and as such was a significant contribution to British radar. The date was the 8th May 1940". A tweaked version of this magnetron reached a peak output of 10 kW by August 1940. It was that model which, in turn, was handed to the Americans as a token of good faith during the negotiations made by the Tizard delegation in 1940 to obtain from the U.S. the resources necessary for Britain to exploit the full military potential of her research and development work. Guglielmo Marconi initiated the research in Italy on radio-based detection technology. In 1933, while participating with his Italian firm in experiments with a 600 MHz communications link across Rome, he noted transmission disturbances caused by moving objects adjacent to its path. This led to the development at his laboratory at Cornegliano of a 330-MHz (0.91-m) CW Doppler detection system that he called radioecometro. Barkhausen-Kurz tubes were used in both the transmitter and receiver. In May 1935, Marconi demonstrated his system to the Fascist dictator Benito Mussolini and members of the military General Staff; however, the output power was insufficient for military use. While Marconi's demonstration raised considerable interest, little more was done with his apparatus. 
Mussolini directed that radio-based detection technology be further developed, and it was assigned to the Regio Istituto Elettrotecnico e delle Comunicazioni (RIEC, Royal Institute for Electro-technics and Communications). The RIEC had been established in 1916 on the campus of the Italian Naval Academy in Livorno. Lieutenant Ugo Tiberio, a physics and radio-technology instructor at the Academy, was assigned to head the project on a part-time basis. Tiberio prepared a report on developing an experimental apparatus that he called telemetro radiofonico del rivelatore (RDT, Radio-Detector Telemetry). The report, submitted in mid-1936, included what was later known as the radar range equation. When the work got underway, Nello Carrara, a civilian physics instructor who had been doing research at the RIEC in microwaves, was added to be responsible for developing the RDT transmitter. Before the end of 1936, Tiberio and Carrara had demonstrated the EC-1, the first Italian RDT system. This had an FM transmitter operating at 200 MHz (1.5 m) with a single parabolic cylinder antenna. It detected targets by mixing the transmitted and the Doppler-shifted reflected signals, resulting in an audible tone. The EC-1 did not provide a range measurement; to add this capability, development of a pulsed system was initiated in 1937. Captain Alfeo Brandimarte joined the group and primarily designed the first pulsed system, the EC-2. This operated at 175 MHz (1.7 m) and used a single antenna made with a number of equi-phased dipoles. The detected signal was intended to be displayed on an oscilloscope. There were many problems, and the system never reached the testing stage. Work then turned to developing higher power and operating frequencies. Carrara, in cooperation with the firm FIVRE, developed a magnetron-like device. This was composed of a pair of triodes connected to a resonant cavity and produced 10 kW at 425 MHz (70 cm). It was used in designing two versions of the EC-3, one for shipboard use and the other for coastal defense. Italy, joining Germany, entered WWII in June 1940 without an operational RDT. A breadboard of the EC-3 was built and tested from atop a building at the Academy, but most RDT work was stopped as direct support of the war took priority. In early 1939, the British Government invited representatives from the most technically advanced Commonwealth Nations to visit England for briefings and demonstrations on the highly secret RDF (radar) technology. Based on this, RDF developments were started in Australia, Canada, New Zealand, and South Africa by September 1939. In addition, this technology was independently developed in Hungary early in the war period. In Australia, the Radiophysics Laboratory was established at Sydney University under the Council for Scientific and Industrial Research; John H. Piddington was responsible for RDF development. The first project was a 200-MHz (1.5-m) shore-defense system for the Australian Army. Designated ShD, this was first tested in September 1941, and eventually installed at 17 ports. Following the Japanese attack on Pearl Harbor, the Royal Australian Air Force urgently needed an air-warning system, and Piddington's team, using the ShD as a basis, put the AW Mark I together in five days. It was being installed in Darwin, Northern Territory, when Australia received the first Japanese attack on February 19, 1942. A short time later, it was converted to a light-weight transportable version, the LW-AW Mark II; this was used by the Australian forces, as well as the U.S. 
Army, in early island landings in the South Pacific. The early RDF developments in Canada were at the Radio Section of the National Research Council of Canada. Using commercial components and with essentially no further assistance from Britain, John Tasker Henderson led a team in developing the Night Watchman, a surface-warning system for the Royal Canadian Navy to protect the entrance to Halifax Harbour. Successfully tested in July 1940, this set operated at 200 MHz (1.5 m), had a 1 kW output with a pulse length of 0.5 μs, and used a relatively small, fixed antenna. This was followed by a ship-borne set designated Surface Warning 1st Canadian (SW1C) with the antenna hand-rotated through the use of a Chevrolet steering wheel in the operator's compartment. The SW1C was first tested at sea in mid-May 1941, but the performance was so poor compared to the Royal Navy's Model 271 ship-borne radar that the Royal Canadian Navy eventually adopted the British 271 in place of the SW1C. For coastal defense by the Canadian Army, a 200 MHz set with a transmitter similar to the Night Watchman was developed. Designated CD, it used a large, rotating antenna atop a 70-foot (21 m) wooden tower. The CD was put into operation in January 1942. Ernest Marsden represented New Zealand at the briefings in England, and then established two facilities for RDF development – one in Wellington at the Radio Section of the Central NZ Post Office, and another at Canterbury University College in Christchurch. Charles N. Watson-Munro led the development of land-based and airborne sets at Wellington, while Frederick W. G. White led the development of shipboard sets at Christchurch. Before the end of 1939, the Wellington group had converted an existing 180-MHz (1.6-m), 1 kW transmitter to produce 2-μs pulses and tested it to detect large vessels at up to 30 km; this was designated CW (Coastal Watching). A similar set, designated CD (Coast Defense), used a CRT for display and had lobe-switching on the receiving antenna; this was deployed in Wellington in late 1940. A partially completed ASV 200 MHz set was brought from Britain by Marsden, and another group at Wellington built this into an aircraft set for the Royal New Zealand Air Force; this was first flown in early 1940. At Christchurch, there was a smaller staff and work went slower, but by July 1940, a 430-MHz (70-cm), 5 kW set was tested. Two types, designated SW (Ship Warning) and SWG (Ship Warning, Gunnery), were placed into service by the Royal New Zealand Navy starting in August 1941. In all, some 44 types were developed in New Zealand during WWII. South Africa did not have a representative at the 1939 meetings in England, but in mid-September, as Ernest Marsden was returning by ship to New Zealand, Basil F. J. Schonland came aboard and received three days of briefings. Schonland, a world authority on lightning and Director of the Bernard Price Institute of Geophysics at Witwatersrand University, immediately started an RDF development using amateur radio components and the Institute's lightning-monitoring equipment. Designated JB (for Johannesburg), the 90-MHz (3.3-m), 500-W mobile system was tested in November 1939, just two months after its start. The prototype was operated in Durban before the end of 1939, detecting ships and aircraft at distances up to 80 km, and by the next March a system was fielded by anti-aircraft brigades of the South African Defence Force. 
In Hungary, Zoltán Lajos Bay was a Professor of Physics at the Technical University of Budapest as well as the Research Director of Egyesült Izzolampa (IZZO), a radio and electrical manufacturing firm. In late 1942, IZZO was directed by the Minister of Defense to develop a radio-location (rádiólokáció, radar) system. Using journal papers on ionospheric measurements for information on pulsed transmission, Bay developed a system called Sas (Eagle) around existing communications hardware. The Sas operated at 120 MHz (2.5 m) and was in a cabin with separate transmitting and receiving dipole arrays attached; the assembly was all on a rotatable platform. According to published records, the system was tested in 1944 atop Mount János and had a range of “better than 500 km”. A second Sas was installed at another location. There is no indication that either Sas installation was ever in regular service. After the war, Bay used a modified Sas to successfully bounce a signal off the moon.
World War II radar
At the start of World War II in September 1939, both the United Kingdom and Germany knew of each other's ongoing efforts in radio navigation and its countermeasures – the "Battle of the beams". Also, both nations were generally aware of, and intensely interested in, the other's developments in radio-based detection and tracking, and engaged in an active campaign of espionage and false leaks about their respective equipment. By the time of the Battle of Britain, both sides were deploying range and direction-finding units (radars) and control stations as part of integrated air defense capability. However, the German Funkmessgerät (radio measuring device) systems could not assist in an offensive role and were thus not supported by Adolf Hitler. Also, the Luftwaffe did not sufficiently appreciate the importance of British Range and Direction Finding (RDF) stations as part of the RAF's air defense capability, contributing to the Luftwaffe's failure. While the United Kingdom and Germany led in pre-war advances in the use of radio for detection and tracking of aircraft, there were also developments in the United States, the Soviet Union, and Japan. Wartime systems in all of these nations will be summarized. The acronym RADAR (for RAdio Detection And Ranging) was coined by the U.S. Navy in 1940, and the subsequent name "radar" was soon widely used. The XAF and CXAM search radars were designed by the Naval Research Laboratory, and were the first operational radars in the US fleet, produced by RCA. When France had just fallen to the Nazis and Britain had no money to develop the magnetron on a massive scale, Churchill agreed that Sir Henry Tizard should offer the magnetron to the Americans in exchange for their financial and industrial help (the Tizard Mission). An early 6 kW version, built in England by the General Electric Company Research Laboratories, Wembley, London (not to be confused with the similarly named American company General Electric), was given to the US government in September 1940. The British magnetron was a thousand times more powerful than the best American transmitter at the time and produced accurate pulses. At the time the most powerful equivalent microwave producer available in the US (a klystron) had a power of only ten watts. The cavity magnetron was widely used during World War II in microwave radar equipment and is often credited with giving Allied radar a considerable performance advantage over German and Japanese radars, thus directly influencing the outcome of the war. 
It was later described by the noted historian James Phinney Baxter III as "The most valuable cargo ever brought to our shores". The Bell Telephone Laboratories made a producible version from the magnetron delivered to America by the Tizard Mission, and before the end of 1940, the Radiation Laboratory had been set up on the campus of the Massachusetts Institute of Technology to develop various types of radar using the magnetron. By early 1941, portable centimetric airborne radars were being tested in American and British aircraft. In late 1941, the Telecommunications Research Establishment in Great Britain used the magnetron to develop a revolutionary airborne, ground-mapping radar codenamed H2S. The H2S radar was in part developed by Alan Blumlein and Bernard Lovell. The magnetron radars used by the US and Britain could spot the periscope of a U-boat. World War II, which gave impetus to the great surge in radar development, ended between the Allies and Germany in May 1945, followed by Japan in August. With this, radar activities in Germany and Japan ceased for a number of years. In other countries, particularly the United States, Britain, and the USSR, the politically unstable post-war years saw continued radar improvements for military applications. In fact, these three nations all made significant efforts in bringing scientists and engineers from Germany to work in their weapon programs; in the U.S., this was under Operation Paperclip. Even before the end of the war, various projects directed toward non-military applications of radar and closely related technologies were initiated. The US Army Air Forces and the British RAF had made wartime advances in using radar for handling aircraft landing, and this was rapidly expanded into the civil sector. The field of radio astronomy was one of the related technologies; although discovered before the war, it immediately flourished in the late 1940s with many scientists around the world establishing new careers based on their radar experience. Four techniques, highly important in post-war radars, were matured in the late 1940s and early 1950s: pulse Doppler, monopulse, phased array, and synthetic aperture; the first three were known and even used during wartime developments, but were matured later.
- Pulse-Doppler radar (often known as moving target indication, or MTI) uses the Doppler-shifted signals from targets to better detect moving targets in the presence of clutter.
- Monopulse radar (also called simultaneous lobing) was conceived by Robert Page at the NRL in 1943. With this, the system derives error-angle information from a single pulse, greatly improving the tracking accuracy.
- Phased-array radar has the many segments of a large antenna separately controlled, allowing the beam to be quickly directed. This greatly reduces the time necessary to change the beam direction from one point to another, allowing almost simultaneous tracking of multiple targets while maintaining overall surveillance.
- Synthetic-aperture radar (SAR) was invented in the early 1950s at Goodyear Aircraft Corporation. Using a single, relatively small antenna carried on an aircraft, a SAR combines the returns from each pulse to produce a high-resolution image of the terrain comparable to that obtained by a much larger antenna. SAR has wide applications, particularly in mapping and remote sensing.
One of the early applications of digital computers was in switching the signal phase in elements of large phased-array antennas. 
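The phase settings involved follow from simple geometry. The sketch below is illustrative only and does not describe any particular system; it is written in Python, and the element spacing, wavelength, and steering angle used in the example are assumed values. It computes the progressive phase shifts that steer the beam of a uniform linear array away from broadside.

import math

def steering_phases(num_elements, spacing_m, wavelength_m, steer_angle_deg):
    # Phase shift (radians) for each element so that all element
    # contributions add in phase in the chosen direction, measured
    # from broadside.
    k = 2 * math.pi / wavelength_m          # free-space wavenumber
    theta = math.radians(steer_angle_deg)
    return [-k * spacing_m * n * math.sin(theta) for n in range(num_elements)]

# Example: 8 elements at half-wavelength spacing for a 10-cm (3 GHz) radar,
# beam steered 20 degrees off broadside.
print([round(p, 2) for p in steering_phases(8, 0.05, 0.10, 20.0)])

Redirecting the beam amounts to loading a new set of phase values into the elements, which is why an electronically scanned array can shift its beam in microseconds, whereas a mechanically rotated antenna cannot.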
As smaller computers came into being, these were quickly applied to digital signal processing using algorithms for improving radar performance. Other advances in radar systems and applications in the decades following WWII are far too many to be included herein. The following sections are intended to provide representative samples. In the United States, the Rad Lab at MIT officially closed at the end of 1945. The Naval Research Laboratory (NRL) and the Army's Evans Signal Laboratory continued with new activities in centimeter radar development. The United States Air Force (USAF) – separated from the Army in 1946 – concentrated radar research at their Cambridge Research Center (CRC) at Hanscom Field, Massachusetts. In 1951, MIT opened the Lincoln Laboratory for joint developments with the CRC. While the Bell Telephone Laboratories embarked on major communications upgrades, they continued with the Army in radar for their ongoing Nike air-defense program. In Britain, the RAF's Telecommunications Research Establishment (TRE) and the Army's Radar Research and Development Establishment (RRDE) both continued at reduced levels at Malvern, Worcestershire, then in 1953 were combined to form the Radar Research Establishment. In 1948, all of the Royal Navy's radio and radar R&D activities were combined to form the Admiralty Signal and Radar Establishment, located near Portsmouth, Hampshire. The USSR, although devastated by the war, immediately embarked on the development of new weapons, including radars. During the Cold War period following WWII, the primary "axis" of combat shifted to lie between the United States and the Soviet Union. By 1949, both sides had nuclear weapons carried by bombers. To provide early warning of an attack, both deployed huge radar networks of increasing sophistication at ever-more remote locations. In the West, the first such system was the Pinetree Line, deployed across Canada in the early 1950s, backed up with radar pickets on ships and oil platforms off the east and west coasts. The Pinetree Line initially used vintage pulsed radars and was soon supplemented with the Mid-Canada Line (MCL). Soviet technology improvements made these Lines inadequate and, in a construction project involving 25,000 persons, the Distant Early Warning Line (DEW Line) was completed in 1957. Stretching from Alaska to Baffin Island and covering over 6,000 miles (9,700 km), the DEW Line consisted of 63 stations with AN/FPS-19 high-power, pulsed, L-Band radars, most augmented by AN/FPS-23 pulse-Doppler systems. The Soviet Union tested its first Intercontinental Ballistic Missile (ICBM) in August 1957, and in a few years the early-warning role was passed almost entirely to the more capable DEW Line. Both the U.S. and the Soviet Union then had ICBMs with nuclear warheads, and each began the development of a major anti-ballistic missile (ABM) system. In the USSR, this was the Fakel V-1000, and for this they developed powerful radar systems. This was eventually deployed around Moscow as the A-35 anti-ballistic missile system, supported by radars designated by NATO as the Cat House, Dog House, and Hen House. In 1957, the U.S. Army initiated an ABM system first called Nike-X; this passed through several names, eventually becoming the Safeguard Program. For this, there was a long-range Perimeter Acquisition Radar (PAR) and a shorter-range, more precise Missile Site Radar (MSR). The PAR was housed in a 128-foot (39 m)-high nuclear-hardened building with one face sloping 25 degrees facing north. 
This contained 6,888 antenna elements separated in transmitting and receiving phased arrays. The L-Band transmitter used 128 long-life traveling-wave tubes (TWTs), having a combined power in the megawatt range. The PAR could detect incoming missiles outside the atmosphere at distances up to 1,800 miles (2,900 km). The MSR had an 80-foot (24 m), truncated pyramid structure, with each face holding a phased-array antenna 13 feet (4.0 m) in diameter and containing 5,001 array elements used for both transmitting and receiving. Operating in the S-Band, the transmitter used two klystrons functioning in parallel, each with megawatt-level power. The MSR could search for targets from all directions, acquiring them at up to 300 miles (480 km) range. One Safeguard site, intended to defend Minuteman ICBM missile silos near the Grand Forks AFB in North Dakota, was finally completed in October 1975, but the U.S. Congress withdrew all funding after it was operational but a single day. During the following decades, the U.S. Army and the U.S. Air Force developed a variety of large radar systems, but the long-serving BTL gave up military development work in the 1970s. A modern radar developed by the U.S. Navy is the AN/SPY-1. First fielded in 1973, this S-Band, 6 MW system has gone through a number of variants and is a major component of the Aegis Combat System. An automatic detect-and-track system, it is computer controlled using four complementary three-dimensional passive electronically scanned array antennas to provide hemispherical coverage. Radar signals, traveling with line-of-sight propagation, normally have a range to ground targets limited by the visible horizon, or less than about 10 miles (16 km). Airborne targets can be detected by ground-level radars at greater ranges, but, at best, several hundred miles. Since the beginning of radio, it had been known that signals of appropriate frequencies (3 to 30 MHz) could be “bounced” from the ionosphere and received at considerable distances. As long-range bombers and missiles came into being, there was a need to have radars give early warnings at great ranges. In the early 1950s, a team at the Naval Research Laboratory came up with the Over-the-Horizon (OTH) radar for this purpose. To distinguish targets from other reflections, it was necessary to use a phase-Doppler system. Very sensitive receivers with low-noise amplifiers had to be developed. Since the signal going to the target and returning had a propagation loss proportional to the range raised to the fourth power (see the range-equation note following this paragraph), a powerful transmitter and large antennas were required. A digital computer with considerable capability (new at that time) was necessary for analyzing the data. In 1950, their first experimental system was able to detect rocket launches 600 miles (970 km) away at Cape Canaveral, and the cloud from a nuclear explosion in Nevada 1,700 miles (2,700 km) distant. In the early 1970s, a joint American-British project, code named Cobra Mist, used a 10-MW OTH radar at Orfordness (the birthplace of British radar), England, in an attempt to detect aircraft and missile launchings over the Western USSR. Because of US-USSR ABM agreements, this was abandoned within two years. In the same time period, the Soviets were developing a similar system; this successfully detected a missile launch at 2,500 km (1,600 mi). 
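The fourth-power loss just mentioned is the familiar consequence of the radar range equation. In its standard monostatic textbook form (the symbols below are the conventional ones, not parameters of any particular OTH system),

P_r = \frac{P_t \, G_t \, G_r \, \lambda^2 \, \sigma}{(4\pi)^3 R^4},

where P_t is the transmitted power, G_t and G_r are the transmit and receive antenna gains, \lambda is the wavelength, \sigma is the target's radar cross-section, and R is the range. Doubling the detection range therefore demands roughly sixteen times the signal margin, which is why OTH radars needed megawatt-class transmitters, very large antennas, and extremely sensitive receivers.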
By 1976, this had matured into an operational system named Duga (“Arc” in English), but known to western intelligence as Steel Yard and called Woodpecker by radio amateurs and others who suffered from its interference – the transmitter was estimated to have a power of 10 MW. Australia, Canada, and France also developed OTH radar systems. With the advent of satellites with early-warning capabilities, the military lost most of its interest in OTH radars. However, in recent years, this technology has been reactivated for detecting and tracking ocean shipping in applications such as maritime reconnaissance and drug enforcement. Systems using an alternate technology have also been developed for over-the-horizon detection. Due to diffraction, electromagnetic surface waves are scattered to the rear of objects, and these signals can be detected in a direction opposite from high-powered transmissions. Called OTH-SW (SW for Surface Wave), such a system is used by Russia to monitor the Sea of Japan, and Canada has a system for coastal surveillance.
Civil aviation radars
The post-war years saw the beginning of a revolutionary development in Air Traffic Control (ATC) – the introduction of radar. In 1946, the Civil Aeronautics Administration (CAA) unveiled an experimental radar-equipped tower for control of civil flights. By 1952, the CAA had begun its first routine use of radar for approach and departure control. Four years later, it placed a large order for long-range radars for use in en route ATC; these had the capability, at higher altitudes, to see aircraft within 200 nautical miles (370 km). In 1960, it became mandatory for aircraft flying in certain areas to carry a radar transponder that identified the aircraft and helped improve radar performance. Since 1966, the responsible agency has been called the Federal Aviation Administration (FAA). A Terminal Radar Approach Control (TRACON) is an ATC facility usually located within the vicinity of a large airport. In the US Air Force it is known as RAPCON (Radar Approach Control), and in the US Navy as a RATCF (Radar Air Traffic Control Facility). Typically, the TRACON controls aircraft within a 30 to 50 nautical mile (56 to 93 km) radius of the airport at an altitude between 10,000 and 15,000 feet (3,000 to 4,600 m). This uses one or more Airport Surveillance Radars (ASR-8, 9, and 11; the ASR-7 is obsolete), sweeping the sky once every few seconds. These primary ASR radars are typically paired with secondary radars (Air Traffic Radar Beacon Interrogators, or ATCBI) of the ATCBI-5, Mode S, or MSSR types. Unlike primary radar, secondary radar relies upon an aircraft-based transponder, which receives an interrogation from the ground and replies with an appropriate digital code that includes the aircraft's identification and reports its altitude. The principle is similar to that of the military IFF (Identification Friend or Foe) system. The secondary radar antenna array rides atop the primary radar dish at the radar site, with both rotating at approximately 12 revolutions per minute. The Digital Airport Surveillance Radar (DASR) is a newer TRACON radar system, replacing the old analog systems with digital technology. The civilian nomenclature for these radars is the ASR-9 and the ASR-11, and AN/GPN-30 is used by the military. In the ASR-11, two radar systems are included. The primary is an S-Band (~2.8 GHz) system with 25 kW pulse power. It provides 3-D tracking of target aircraft and also measures rainfall intensity. 
The secondary is a P-Band (~1.05 GHz) system with a peak-power of about 25 kW. It uses a transponder set to interrogate aircraft and receive operational data. The antennas for both systems rotate atop a tall tower. During World War II, military radar operators noticed noise in returned echoes due to weather elements like rain, snow, and sleet. Just after the war, military scientists returned to civilian life or continued in the Armed Forces and pursued their work in developing a use for those echoes. In the United States, David Atlas, for the Air Force group at first, and later for MIT, developed the first operational weather radars. In Canada, J.S. Marshall and R.H. Douglas formed the "Stormy Weather Group" in Montreal. Marshall and his doctoral student Walter Palmer are well known for their work on the drop size distribution in mid-latitude rain that led to understanding of the Z-R relation, which correlates a given radar reflectivity with the rate at which water is falling on the ground (a common numerical form of this relation is given below). In the United Kingdom, research continued to study the radar echo patterns and weather elements such as stratiform rain and convective clouds, and experiments were done to evaluate the potential of different wavelengths from 1 to 10 centimetres. Between 1950 and 1980, reflectivity radars, which measure the position and intensity of precipitation, were built by weather services around the world. In the United States, the U.S. Weather Bureau, established in 1870 with the specific mission of providing meteorological observations and giving notice of approaching storms, developed the WSR-1 (Weather Surveillance Radar-1), one of the first weather radars. This was a modified version of the AN/APS-2F radar, which the Weather Bureau acquired from the Navy. The WSR-1A, WSR-3, and WSR-4 were also variants of this radar. This was followed by the WSR-57 (Weather Surveillance Radar – 1957), the first weather radar designed specifically for a national warning network. Using WWII technology based on vacuum tubes, it gave only coarse reflectivity data and no velocity information. Operating at 2.89 GHz (S-Band), it had a peak-power of 410 kW and a maximum range of about 580 mi (930 km). AN/FPS-41 was the military designation for the WSR-57. The early meteorologists had to watch a cathode ray tube display. During the 1970s, radars began to be standardized and organized into larger networks. The next significant change in the United States was the WSR-74 series, beginning operations in 1974. There were two types: the WSR-74S, for replacements and filling gaps in the WSR-57 national network, and the WSR-74C, primarily for local use. Both were transistor-based, and their primary technical difference was indicated by the letter: S band (better suited for long range) and C band, respectively. Until the 1990s, 128 radars of the WSR-57 and WSR-74 models were spread across the country. The first devices to capture radar images were developed during the same period. The number of scanned angles was increased to get a three-dimensional view of the precipitation, so that horizontal cross-sections (CAPPI) and vertical ones could be produced. Studies of the organization of thunderstorms were then possible for the Alberta Hail Project in Canada and the National Severe Storms Laboratory (NSSL) in the US in particular. The NSSL, created in 1964, began experimentation on dual polarization signals and on Doppler effect uses. In May 1973, a tornado devastated Union City, Oklahoma, just west of Oklahoma City. 
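The Z–R relation mentioned above is normally written as a power law; the classic Marshall–Palmer fit for mid-latitude stratiform rain is

Z = 200\, R^{1.6},

where Z is the reflectivity factor in mm^6/m^3 and R is the rain rate in mm/h, with radars usually reporting reflectivity logarithmically as dBZ = 10 \log_{10} Z. Different coefficient pairs apply to snow, drizzle, and convective rain, which is one reason reflectivity-only radars give only approximate precipitation estimates.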
For the first time, a Dopplerized 10-cm wavelength radar from NSSL documented the entire life cycle of that tornado. The researchers discovered a mesoscale rotation in the cloud aloft before the tornado touched the ground: the tornadic vortex signature. NSSL's research helped convince the National Weather Service that Doppler radar was a crucial forecasting tool. Between 1980 and 2000, weather radar networks became the norm in North America, Europe, Japan and other developed countries. Conventional radars were replaced by Doppler radars, which in addition to the position and intensity of precipitation could track the relative velocity of the particles in the air. In the United States, the construction of a network consisting of 10 cm (4 in) wavelength radars, called NEXRAD or WSR-88D (Weather Service Radar 1988 Doppler), was started in 1988 following NSSL's research. In Canada, Environment Canada constructed the King City station, with a five-centimeter research Doppler radar, by 1985; McGill University dopplerized its radar (J. S. Marshall Radar Observatory) in 1993. This led to a complete Canadian Doppler network between 1998 and 2004. France and other European countries switched to Doppler networks between the end of the 1990s and the early 2000s. Meanwhile, rapid advances in computer technology led to algorithms to detect signs of severe weather and a plethora of "products" for media outlets and researchers. After 2000, research on dual polarization technology moved into operational use, increasing the amount of information available on precipitation type (e.g. rain vs. snow). "Dual polarization" means that microwave radiation which is polarized both horizontally and vertically (with respect to the ground) is emitted. Wide-scale deployment is expected by the end of the decade in some countries such as the United States, France, and Canada. Since 2003, the U.S. National Oceanic and Atmospheric Administration has been experimenting with phased-array radar as a replacement for the conventional parabolic antenna to provide more time resolution in atmospheric sounding. This would be very important in severe thunderstorms, as their evolution can be better evaluated with more timely data. Also in 2003, the National Science Foundation established the Engineering Research Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), a multidisciplinary, multi-university collaboration of engineers, computer scientists, meteorologists, and sociologists to conduct fundamental research, develop enabling technology, and deploy prototype engineering systems designed to augment existing radar systems by sampling the generally undersampled lower troposphere with inexpensive, fast-scanning, dual-polarization, mechanically scanned and phased-array radars. The plan position indicator, dating from the early days of radar and still the most common type of display, provides a map of the targets surrounding the radar location. If the radar antenna on an aircraft is aimed downward, a map of the terrain is generated, and the larger the antenna, the greater the image resolution. After centimeter radar came into being, downward-looking radars – the H2S (S-Band) and H2X (X-Band) – provided real-time maps used by the U.S. and Britain in bombing runs over Europe at night and through dense clouds. In 1951, Carl Wiley led a team at Goodyear Aircraft Corporation (later Goodyear Aerospace) in developing a technique for greatly expanding and improving the resolution of radar-generated images. 
Called synthetic aperture radar (SAR), the technique uses an ordinary-sized antenna fixed to the side of an aircraft together with highly complex signal processing to give an image that would otherwise require a much larger, scanning antenna; thus, the name synthetic aperture. As each pulse is emitted, it is radiated over a lateral band onto the terrain. The return is spread in time, due to reflections from features at different distances. Motion of the vehicle along the flight path gives the horizontal increments. The amplitude and phase of returns are combined by the signal processor using Fourier transform techniques in forming the image. The overall technique is closely akin to optical holography. Through the years, many variations of the SAR have been made, resulting in diversified applications. In initial systems, the signal processing was too complex for on-board operation; the signals were recorded and processed later. Processors using optical techniques were then tried for generating real-time images, but advances in high-speed electronics now allow on-board processing for most applications. Early systems gave a resolution in tens of meters, but more recent airborne systems provide resolutions to about 10 cm. Current ultra-wideband systems have resolutions of a few millimeters.
Other radars and applications
There are many other post-war radar systems and applications. Only a few will be noted. The most widespread radar device today is undoubtedly the radar gun. This is a small, usually hand-held, Doppler radar that is used to detect the speed of objects, especially trucks and automobiles in regulating traffic, as well as pitched baseballs, runners, or other moving objects in sports. This device can also be used to measure the surface speed of water and continuously manufactured materials. A radar gun does not return information regarding the object's position; it uses the Doppler effect to measure the speed of a target. First developed in 1954, most radar guns operate with very low power in the X or Ku Bands. Some use infrared radiation or laser light; these are usually called LIDAR. A related technology for velocity measurements in flowing liquids or gases is called laser Doppler velocimetry; this technology dates from the mid-1960s. As pulsed radars were initially being developed, the use of very narrow pulses was examined. The pulse length governs the accuracy of distance measurement by radar – the shorter the pulse, the greater the precision. Also, for a given average power and pulse repetition frequency (PRF), a shorter pulse results in a higher peak power. Harmonic analysis shows that the narrower the pulse, the wider the band of frequencies that contain the energy, leading to such systems also being called wide-band radars. In the early days, the electronics for generating and receiving these pulses was not available; thus, essentially no applications of this were initially made. By the 1970s, advances in electronics led to renewed interest in what was often called short-pulse radar. With further advances, it became practical to generate pulses having a width on the same order as the period of the RF carrier (T = 1/f). This is now generally called impulse radar. The first significant application of this technology was in ground-penetrating radar (GPR). Developed in the 1970s, GPR is now used for structural foundation analysis, archeological mapping, treasure hunting, unexploded ordnance identification, and other shallow investigations. 
This is possible because impulse radar can precisely locate the boundaries between the general media (the soil) and the desired target. The results, however, are non-unique and are highly dependent upon the skill of the operator and the subsequent interpretation of the data. In dry or otherwise favorable soil and rock, penetration up to 300 feet (91 m) is often possible. For distance measurements at these short ranges, the transmitted pulse is usually only one radio-frequency cycle in duration, leading to the "impulse" designation; with a 100 MHz carrier and a PRF of 10 kHz (typical parameters), the pulse duration is only 10 ns (nanoseconds). A variety of GPR systems are commercially available in back-pack and wheeled-cart versions with pulse-power up to a kilowatt. With continued development of electronics, systems with pulse durations measured in picoseconds became possible. Applications are as varied as security and motion sensors, building stud-finders, collision-warning devices, and cardiac-dynamics monitors. Some of these devices are matchbox sized, including a long-life power source. As radar was being developed, astronomers considered its application in making observations of the Moon and other nearby extraterrestrial objects. In 1944, Zoltán Lajos Bay had this as a major objective as he developed a radar in Hungary. His radar telescope was taken away by the conquering Soviet army and had to be rebuilt, thus delaying the experiment. Under Project Diana, conducted by the Army's Evans Signal Laboratory in New Jersey, a modified SCR-271 radar (the fixed-position version of the SCR-270) operating at 110 MHz with 3 kW peak-power was used in receiving echoes from the Moon on January 10, 1946 (the round-trip delay involved is worked out below). Zoltán Bay accomplished this on the following February 6. Radio astronomy also had its start following WWII, and many scientists involved in radar development then entered this field. A number of radio observatories were constructed during the following years; however, because of the additional cost and complexity of involving transmitters and associated receiving equipment, very few were dedicated to radar astronomy. In fact, essentially all major radar astronomy activities have been conducted as adjuncts to radio astronomy observatories. The radio telescope at the Arecibo Observatory, opened in 1963, is the largest in the world. Owned by the U.S. National Science Foundation and contractor operated, it is used primarily for radio astronomy, but equipment is available for radar astronomy. This includes transmitters operating at 47 MHz, 439 MHz, and 2.38 GHz, all with very-high pulse power. It has a 305-m (1,000-ft) primary reflector fixed in position; the secondary reflector is on tracks to allow precise pointing to different parts of the sky. Many significant scientific discoveries have been made using the Arecibo radar telescope, including mapping of the surface roughness of Mars and observations of Saturn and its largest moon, Titan. In 1989, the observatory radar-imaged an asteroid for the first time in history. Several spacecraft orbiting the Moon, Mercury, Venus, Mars, and Saturn have carried radars for surface mapping; a ground-penetrating radar was carried on the Mars Express mission. Radar systems on a number of aircraft and orbiting spacecraft have mapped the entire Earth for various purposes; on the Shuttle Radar Topography Mission, the entire planet was mapped at a 30-m resolution. 
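The lunar-echo experiments described above involved delays far outside ordinary radar practice. Taking the mean Earth–Moon distance as roughly 384,000 km, the round-trip time is

t = \frac{2d}{c} \approx \frac{2 \times 3.84 \times 10^{8}\ \text{m}}{3.0 \times 10^{8}\ \text{m/s}} \approx 2.6\ \text{s},

so the receivers in both Project Diana and Bay's experiment had to remain stable and listen for an echo roughly two and a half seconds after each transmission.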
The Jodrell Bank Observatory, an operation of the University of Manchester in Britain, was originally started by Bernard Lovell to be a radar astronomy facility. It initially used a war-surplus GL-II radar system operating at 71 MHz (4.2 m). The first observations were of ionized trails in the Geminids meteor shower during December 1945. While the facility soon evolved to become the third largest radio observatory in the world, some radar astronomy continued. The largest (250 ft or 76 m in diameter) of their three fully steerable radio telescopes became operational just in time to radar-track Sputnik 1, the first artificial satellite, in October 1957.
- Cavity magnetron
- History of smart antennas
- List of German inventions and discoveries
- List of World War II electronic warfare equipment
- Secrets of Radar Museum
Classroom Resources: Quantitative Chemistry 1 – 25 of 77 Classroom Resources Combustion, Chemical Change, Balancing Equations, Reaction Rate, Conservation of Mass, Conservation of Matter, Stoichiometry, Limiting Reactant, Chemical Change, Conservation of Matter, Conservation of Mass, Graphing, Error Analysis, Accuracy, Observations, Inferences, Interdisciplinary, Reaction Rate, Catalysts, Measurements, Mole Concept | High School Lesson Plan: Clean Air Chemistry In this lesson, students will learn about air pollution and some steps toward mitigating it. First, they will burn a candle and measure its mass and the concentration of CO2 over time. Students will discuss which data set they have more confidence in and why and then use stoichiometry to predict outcomes. Next, students explore incomplete combustion in a model-based worksheet that shows how a lack of O2 in the burning of fuels can produce air pollution. Students work together to interpret the models, define terms, and draw conclusions. Lastly, students work in groups using Lego models to illustrate how a catalytic converter works. They race “Nature” against catalysts “Palladium,” “Platinum,” and “Rhodium” to see what breaks down air pollution molecules fastest. Measurements, SI Units, Mole Concept, Physical Properties, Density | High School, Middle School Activity: Animation Activity: Units of Chemistry In this activity, students will view an animation that introduces them to the importance of including units to communicate the value of measurements effectively. The animation presents definitions, units of measurement, and measuring tools for physical properties that are commonly measured or calculated in chemistry class: mass, length, temperature, volume, amount (moles), and density. Percent Composition, Measurements, Chemistry Basics, Observations | High School Lab: Dehydration of Hydrated Salt In this lab, students are introduced to chemical measurement in a hands-on investigation using a heat source and a hydrated compound. Students will determine the percentage water lost, by mass, from a hydrated compound during the heating process. Additionally, students will analyze and interpret their results in a claim, evidence, reasoning format. SI Units, Mole Concept, Measurements, Physical Properties, Density | High School, Middle School Animation: Units of Chemistry Animation In this animation, students will be introduced to the importance of including units to communicate the value of measurements effectively. The animation presents definitions, units of measurement, and measuring tools for physical properties that are commonly measured or calculated in chemistry class: mass, length, temperature, volume, amount (moles), and density. **This video has no audio** Measurements, SI Units, Dimensional Analysis, Scientific Notation, Molecular Structure , Elements, History, Interdisciplinary | High School Lesson Plan: The Discovery of Fullerenes In this lesson, students will learn about a class of compounds called fullerenes through a reading about their discovery. Metric conversions, organic chemistry, and allotropes are all touched on in this lesson. There are a series of activities to help promote literacy in the science classroom related to the reading. This lesson could be easily used as plans for a substitute teacher, as most of the activities are self-guided. 
Measurements, SI Units, Physical Properties, Observations | High School Lesson Plan: Setting the Standards of Excellence In this lesson, students will learn about standards through a reading about the National Institute of Standards and Technology (NIST), which is the U.S. body that defines standards. There are a series of activities to help promote literacy in the science classroom related to the reading. This lesson could be easily used as plans for a substitute teacher, as most of the activities are self-guided. Introduction, Lab Safety, Measurements | High School Lab: Cleaning Up the Lab In this lab, students will learn how to mass a solid, properly wash glassware, and clean up their lab area. Heat, Temperature, Specific Heat, Law of Conservation of Energy, Enthalpy, Calorimetry, Exothermic & Endothermic, Balancing Equations, Chemical Change, Measurements, Mole Concept, Dimensional Analysis, Culminating Project, Interdisciplinary, Review, Graphing, Observations, Chemical Properties, Physical Properties | High School Project: Handwarmer Design Challenge In this project, students will use their knowledge of thermodynamics to design a handwarmer for a manufacturing company that can maintain a temperature of 30-40°C for at least 5 minutes and is designed for the average human hand. Students will create a final product after rounds of testing and an advertising poster that summarizes the results of their testing and promotes their design. Stoichiometry, Balancing Equations, Predicting Products, Chemical Change, Mole Concept, Dimensional Analysis, Measurements, Chemical Change, Culminating Project | High School Project: Chemical Reaction Soda Bottle Boat Race In this project, students will design and build a soda bottle boat with the goal of having the fastest boat to get to the other end of the rain gutter racetrack. Students will have to complete stoichiometric calculations to determine an appropriate amount of “fuel” (baking soda + vinegar) to power their boat. Introduction, Lab Safety, Chemical Properties, Physical Properties, Chemical Change, Physical Change, History, Separating Mixtures, Elements, Mixtures, Density, Measurements, SI Units, Significant Figures, Dimensional Analysis, Scientific Notation, Accuracy, Molecular Motion, Phase Changes | High School Lesson Plan: The Chemistry Basics and Measurement Quick Start Unit Plan This Quick Start Unit Plan includes all the materials that a teacher will need for the first 10 class meetings of the school year. Each day is outlined with teacher notes, and includes slide presentations as well as directions for demonstrations, activities and labs to use. The fundamental topics covered in the 10 days of lessons are: laboratory safety, laboratory equipment, experimental design, classification of matter, chemical properties, physical properties, chemical change, physical change, phase changes, separation techniques, dimensional analysis, unit conversions, factor label method, accuracy, precision, significant figures, and percent error calculations. This Quick Start Unit plan aims to help students to build a foundation of understanding, and master important topics before moving deeper into the chemistry curriculum. Molecular Structure, Molecular Formula, Measurements, Significant Figures, Molecular Structure , Saturated vs. Unsaturated | High School, Middle School Project: Discovering Chemical Elements in Food In this project, students will analyze nutrition labels of some of the foods and drinks that they recently consumed. 
They will identify which type of macromolecule (carbohydrates, lipids, proteins) is mainly supplied by the item and they will compare their consumption with the daily recommended intake for that type of macromolecule. Students will also investigate salt and added sugar as well as vitamins and minerals in the item. Finally, students will present their findings through short, spoken messages that are recorded and presented through a QR code. These can become a source of information for the school community at large upon completion of the project. Significant Figures, Measurements, Beer's Law, Concentration, Molarity | High School Lab: Investigating Shades of Blue In this lab investigation, students will create a copper(II) nitrate solution. Each group will be given a different measurement device in order to see how the accuracy of the preparation of the solution is affected by the limitations of the measurement device. The goal is for students to have a true understanding of why significant figures are important. Measurements, Accuracy, Dimensional Analysis, Significant Figures, SI Units | High School, Middle School Activity: Measurement Tools, Significant Figures and Conversions In this activity, students will complete several hands-on measurements, using a variety of common measuring tools. They will carefully consider how to properly report each measurement based on the tool used. Students will then complete measurement conversions, and apply their knowledge of significant figures. Partial Pressure, Gas Laws, Ideal Gas, Pressure, Molar Mass, Measurements, Error Analysis | High School Lab: Molar Mass of Butane In this lab, students will experimentally determine the molar mass of butane using Dalton’s law and the ideal gas law. They will also calculate the percent error and explain possible sources of error. Atomic Radius, Scientific Notation, Measurements | Middle School, High School Activity: Powers of 10 - How Small Is an Atom? In this activity, students will use an online interactive to investigate the size of an atom, and compare the size of the atom to other objects using scientific notation. Molecular Structure, Intermolecular Forces, Measurements, SI Units | High School Activity: Designing an Effective Respiratory Cloth Mask In this activity students will use unit conversion to help compare sizes of molecules, viruses, and droplets and then use them to interpret graphical data. They will then use their findings to design a cloth mask that helps protect its wearer against infection by SARS-CoV-2, the coronavirus that causes COVID-19. Measurements, Error Analysis, Accuracy, Accuracy, Significant Figures, Error Analysis | Middle School, High School Lab: Accuracy, Precision, and Error in Measurements In this lab, students make measurements of length and width using four measuring tools. They will measure the same object using measuring sticks of different precision. They will observe that the exactness of a measurement is limited by the precision of the measuring instrument. Measurements, Scientific Notation, Significant Figures, Subatomic Particles | High School Activity: Quantitatively Puzzling In this activity, students will analyze sixteen chemistry-based clues and use the numbers, zero through fifteen as possible answer choices for each one. The clues cover content related to measurement, scientific notation, significant digits, atomic structure and the periodic table. 
Dimensional Analysis, Measurements | Middle School, High School Activity: Dimensional Analysis with Notecards In this activity, students will practice dimensional analysis using pre-made conversion factors on notecards to demonstrate the importance of canceling units to solve problems. Measurements, Dimensional Analysis | Middle School, High School Animation: Measurement Animation In this animation, students review the fundamentals of measurement in length, mass, and volume. The animation also provides opportunities for students to practice unit conversions to confirm their understanding. **This video has no audio** Partial Pressure, Gas Laws, Ideal Gas, Molar Mass, Pressure, Measurements, Error Analysis | High School Lab: Determination of the Molar Mass of Butane In this lab, students will experimentally determine the molar mass of a gas, specifically butane (C4H10), by collection over water. This experiment is an inquiry-based experiment for second-year chemistry or AP chemistry students who have previously collected an insoluble gas. (A worked example of this calculation follows the listing.) Measurements, Dimensional Analysis | Middle School, High School Activity: Animation Activity: Measurement In this animation, students will become familiar with three forms of measurement, including length, mass and volume. Various units of measurement will be presented for comparison, and several conversion calculations will be demonstrated using dimensional analysis. Significant Figures, Measurements, SI Units | High School Activity: Investigating Significant Figures through Inquiry In this activity, students will develop an understanding of why significant figures are important in chemistry and learn how to determine the number of significant figures in a measurement. Measurements, Dimensional Analysis, Chemical Properties, Physical Properties, Chemical Change, Physical Change, Matter, Observations, Mixtures | Middle School, High School Activity: Cupcake Conversions, Bench to Bakery This activity will help to reinforce the importance of scientific measurement and apply it to the introduction of chemical reactions. Using an example of baking a single batch of cupcakes, students will plan for a larger production scale in a commercial bakery. This will help to introduce the idea of producing a reaction at the lab bench and converting it to mass production. In addition, this activity investigates how chemistry is used in everyday life and challenges students to consider potential errors that may occur when completing chemical reactions in the kitchen. Measurements, Dimensional Analysis, Physical Change, Matter, Mixtures | Middle School, High School Activity: Cooking with Conversions In this activity, students will be given a common homemade recipe for German chocolate cake with measurements in English units. They will be asked to convert the English ingredients list to metric units through scientific calculations. Students will also be asked to identify the ingredients as solid, liquid or gas. While reviewing the cooking procedures, students will classify certain steps as containing compounds or mixtures as well as identify whether chemical or physical changes are taking place. The culinary chemistry involved in this lesson should be introduced throughout the activity.
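One of the labs listed above has students determine the molar mass of butane collected over water using Dalton's law and the ideal gas law. As a rough worked illustration of that calculation, the following Python sketch uses assumed example data; none of the numbers come from the lesson plans themselves.

```python
# A worked illustration of the butane molar-mass calculation used in the labs listed above.
# All numbers below are assumed example data, not taken from the lesson plans.

R = 0.08206           # ideal gas constant, L*atm/(mol*K)

mass_g = 0.210        # mass of butane released from the lighter (assumed)
volume_L = 0.0953     # volume of gas collected over water (assumed)
temp_K = 295.0        # water/gas temperature (assumed)
p_total_atm = 1.000   # barometric pressure (assumed)
p_water_atm = 0.026   # vapor pressure of water near 22 C (Dalton's law correction)

# Dalton's law: the collected gas is butane plus water vapor.
p_butane_atm = p_total_atm - p_water_atm

# Ideal gas law: n = PV / (RT), then molar mass M = m / n.
n_mol = p_butane_atm * volume_L / (R * temp_K)
molar_mass = mass_g / n_mol
percent_error = abs(molar_mass - 58.12) / 58.12 * 100  # C4H10 is 58.12 g/mol

print(f"n = {n_mol:.5f} mol, M = {molar_mass:.1f} g/mol, error = {percent_error:.1f}%")
```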
Mathematics 102 — Fall 1999

Tangents

In this chapter we shall finish off our analysis of Galileo's experiments.

The principal idea of differential calculus

Let's arrange things so we are dealing with a ball rolling down an inclined ramp at such an angle that the position coordinate s at time t is t². We have therefore the graph of s versus t (shown at the far left). We are going to try to figure out what the exact velocity at some specific time is, say at t = 2. Our principal motivation for doing what we are going to do is that if we want to find out what the velocity is at t = 2, then we don't really have to look at the whole graph, but just that part of it near where t = 2. So we zoom in to that part, as indicated in the two figures at the right above. First we zoom in to the range from t = 1 to 3, and then to the range 1.7 to 2.3. What we can see from these pictures is that

• If we look only at a small part of the graph, then it is almost indistinguishable from a straight line.

You can see this very roughly without any help, but the pictures make this clearer by adding something. Recall that the tangent line to the graph at a point (x, y) is a straight line which just grazes the graph at (x, y). In this case, a tangent line will touch the graph at (x, y) without crossing it. In the picture above we have drawn the tangent line to the graph at (2, 4) for comparison with the graph itself. You can see that in the middle figure the graph still looks a bit curved at this scale, but that in the third figure the curvature is just about invisible. Of course the curvature does not vanish completely, but it does vanish almost up to the dimensions of the thickness of the lines we are drawing. This discussion also suggests a way to make the principle more precise:

• If we look only at a small part of the graph around a point (x, y), then it is almost indistinguishable from the tangent line at that point.

This simple idea, believe it or not, is what Calculus is all about. We would like to point out that there is at least something subtle about this idea, because it is not true of all graphs. Here, for example, is the graph of the function y = |x|, the absolute value of x. For example, |4| = 4 but |−4| = 4 also. Explicitly we have |x| = x if x ≥ 0, and |x| = −x if x < 0. This graph does not look like a straight line at (0, 0), and in fact has no tangent line there.

Velocity and tangent slopes

We zoom in so that the graph of position versus time looks like a straight line. We want to know how to calculate the velocity from what we see. Now what we have seen already in class is that if the graph were a straight line, then the velocity would be equal to its slope. It is not a straight line, but it is very very close to a straight line, namely the tangent line. So we see that

• The velocity at any point on the graph is equal to the slope of the tangent line at that point.

Keep in mind that this is an entirely theoretical idea. We have no way at the moment to calculate the slope of the tangent line; all we know how to do is to calculate the slope of a line through points close together on the graph according to the rule

slope = ∆s/∆t .

Nonetheless a bit of logic can get what we want. So learning how to calculate the slope of the tangent line is our next order of business.
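Before moving on, the two claims just made can be checked numerically. The short Python sketch below is an illustration added here, not part of the original notes: it compares s(t) = t² with the line through (2, 4) of slope 4 (the slope derived in the next section) over smaller and smaller windows, and then shows that y = |x| has different one-sided secant slopes at 0, so no single tangent line can fit there.

```python
# Numeric illustration of the "zoom in" principle and of the bad case y = |x|.

def s(t):
    return t * t

def line_through_2_4(t):
    return 4 + 4 * (t - 2)   # candidate tangent line at (2, 4)

# The gap between the curve and the line shrinks much faster than the window does.
for half_width in (1.0, 0.3, 0.05):
    t_edge = 2 + half_width
    gap = abs(s(t_edge) - line_through_2_4(t_edge))
    print(f"window half-width {half_width}: curve and line differ by {gap:.4f} at the edge")

# |x| at 0: the slope from the left is -1, from the right is +1, so they never agree.
dx = 0.001
slope_left = (abs(0.0) - abs(-dx)) / dx
slope_right = (abs(dx) - abs(0.0)) / dx
print(f"|x| one-sided slopes at 0: {slope_left:+.0f} and {slope_right:+.0f}")
```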
Calculating the slope of the tangent line

Let's continue to keep the point t = 2 in mind, and let's look at the graph in the middle above, with a few modifications.

[Figure: the graph of s = t² near t = 2, with the points at heights (t − ∆t)², t², and (t + ∆t)² marked and the secant lines through them.]

First of all, we choose some time interval ∆t, not too big. Say ∆t = 0.5. Then we plot on the graph the points at t-distance ∆t away from t = 2. These will have t-values 2 − ∆t = 1.5 and 2 + ∆t = 2.5, and the s-values will be (2 − ∆t)² = 2.25 and (2 + ∆t)² = 6.25. We also plot what are called the secant lines from the lower point to the middle one, and from the middle one to the upper one, and the triangles representing their slopes. We calculate the lower slope to be

(4 − 2.25)/0.5 = 1.75/0.5 = 3.5

and that of the upper one to be

(6.25 − 4)/0.5 = 2.25/0.5 = 4.5 .

Now what we can see directly from the picture is that the lower secant line is not as steep as the tangent line, and the upper secant line is steeper. Therefore the slope of the tangent line is bracketed between the slopes of the upper and lower secants:

3.5 < slope of tangent line < 4.5 .

Now do this over again with a smaller value of ∆t, say ∆t = 0.1. Then the upper point has height 2.1² = 4.41 and the lower one has height 1.9² = 3.61. The lower secant slope is

(4 − 3.61)/0.1 = 3.9

while that of the upper is

(4.41 − 4)/0.1 = 4.1 .

Again we have an inequality, this time

3.9 < slope of tangent line < 4.1 .

We leave it to you to verify that if ∆t = 0.01 then we get

3.99 < slope of tangent line < 4.01 .

To understand exactly what is going on, we use a bit of high school algebra. We recall that (a + b)² = a² + 2ab + b². Therefore

(2 + ∆t)² = 4 + 4∆t + (∆t)²
(2 − ∆t)² = 4 − 4∆t + (∆t)²

This means that the slope of the higher secant is

((2 + ∆t)² − 2²)/∆t = (4∆t + (∆t)²)/∆t = 4 + ∆t

and that of the lower secant is

(2² − (2 − ∆t)²)/∆t = (4∆t − (∆t)²)/∆t = 4 − ∆t .

This means that for every possible value of ∆t, no matter how small, we have a ‘sandwich’

4 − ∆t < slope of tangent line < 4 + ∆t .

What else can the slope of the tangent line be except 4 exactly? Now instead of 2 we look at an arbitrary value of t and inquire what the slope of the tangent line is there. We get for each ∆t a sandwich

2t − ∆t < slope of tangent line < 2t + ∆t ,

and we deduce that the slope of the tangent line must be 2t.

Exercise 1. Graph the function y = x² + x + 1 for x = −5 to x = 5. Find by the same method the slope of the tangent line to the graph at (x, y).

Back to Galileo

If we set s = ct², a similar calculation will tell us that v = 2ct. In particular the velocity of any falling object is proportional to time. Galileo understood this, but his argument was arguably much clumsier than ours. Nonetheless, he understood also that the rate of increase of velocity was constant. This is called acceleration. To us it is hard to imagine the satisfaction that Galileo derived from the simple statement that the acceleration of any falling body is constant during its fall. It was in fact the first time in all of history, as far as we can tell, that a natural law was formulated so precisely and simply. A few years later Isaac Newton put together his Laws of Motion. Analyzing Galileo's work, he saw that the best way to understand it was to introduce the notion of force, which Galileo probably did not arrive at. Newton's second law asserts that force and acceleration are proportional.
This was important because force is something that people develop an intuitive perception of, and also because it often has a simple mathematical expression—for example, in any gravitational field, even one far from the Earth. Newton's Laws of Motion reduced the analysis of motion to two steps: (1) the physical one of understanding forces; (2) the mathematical one of deducing the motion of objects from the description of the forces on them. In this way he broke the apparently intractable problem of understanding motion, even of complicated systems like the Solar system, into two relatively simpler ones. Even the man in the street was impressed.

Other slopes

We want now to forget about falling objects and look at the mathematical problem of finding the slopes of the tangent lines for a wide range of graphs. In a later chapter we shall look at arbitrary polynomial functions, but to finish off this one and to motivate the later discussion we shall look at the graph of y = x⁴. The pictures are essentially the same here. Only the algebra is different. We look at a point (x, x⁴) and then the points (x − ∆x, (x − ∆x)⁴) and (x + ∆x, (x + ∆x)⁴) before it and after it on the curve. We then sandwich the slope of the tangent line:

(x⁴ − (x − ∆x)⁴)/∆x < slope of tangent line < ((x + ∆x)⁴ − x⁴)/∆x .

Here we have to calculate (a + b)⁴. We shall look at similar calculations later on in more detail, but here we just do it directly:

(a + b)² = a² + 2ab + b²
(a + b)³ = (a + b)(a + b)² = (a + b)(a² + 2ab + b²) = a³ + 3a²b + 3ab² + b³
(a + b)⁴ = (a + b)(a + b)³ = (a + b)(a³ + 3a²b + 3ab² + b³) = a⁴ + 4a³b + 6a²b² + 4ab³ + b⁴ .

Therefore we get a new sandwich

(4x³∆x − 6x²(∆x)² + 4x(∆x)³ − (∆x)⁴)/∆x < slope of tangent line < (4x³∆x + 6x²(∆x)² + 4x(∆x)³ + (∆x)⁴)/∆x

or, after dividing through by ∆x,

4x³ − 6x²∆x + 4x(∆x)² − (∆x)³ < slope of tangent line < 4x³ + 6x²∆x + 4x(∆x)² + (∆x)³ .

This is much more complicated than what we had for the parabola, but not a great deal. The point here is that all the terms with a ∆x in them become very small as ∆x does, and we conclude that the slope has to be 4x³.

Exercise 2. Let ∆x = 0.1, 0.01, 0.001, 0.0001 in succession. Write down explicitly the ‘sandwich’ equations for the slope of y = x⁴ at x = 1.

Exercise 3. Let f(x) = x³. Calculate the algebraic expressions (f(x) − f(x − ∆x))/∆x and (f(x + ∆x) − f(x))/∆x. What do these become as ∆x becomes smaller and smaller?
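For readers who want to see the sandwiches close in numerically, here is a short Python sketch (an illustration, not part of the notes). It reproduces the brackets found above for s(t) = t² at t = 2, and then does the same for y = x⁴ at x = 1.5, chosen so as not to give away Exercise 2; the brackets squeeze down onto 4x³ as claimed.

```python
# Numeric check of the secant "sandwich" for t**2 at t = 2 and for x**4 at x = 1.5.

def sandwich(f, x, dx):
    """Return (lower secant slope, upper secant slope) of f around x with step dx."""
    lower = (f(x) - f(x - dx)) / dx
    upper = (f(x + dx) - f(x)) / dx
    return lower, upper

def square(t):
    return t ** 2

def fourth(x):
    return x ** 4

print("s(t) = t^2 at t = 2:")
for dt in (0.5, 0.1, 0.01):
    lo, hi = sandwich(square, 2.0, dt)
    print(f"  dt = {dt:<5} {lo:.3f} < tangent slope < {hi:.3f}")

x = 1.5
print(f"y = x^4 at x = {x} (compare with 4*x**3 = {4 * x ** 3}):")
for dx in (0.1, 0.01, 0.001):
    lo, hi = sandwich(fourth, x, dx)
    print(f"  dx = {dx:<6} {lo:.5f} < tangent slope < {hi:.5f}")
```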
A lichen (LEYE-ken, but in UK often LICH-en) is a composite organism that arises from algae or cyanobacteria living among filaments of multiple fungi in a mutualistic relationship. The combined lichen has properties different from those of its component organisms. Lichens come in many colors, sizes, and forms. The properties are sometimes plant-like, but lichens are not plants. Lichens may have tiny, leafless branches (fruticose), flat leaf-like structures (foliose), flakes that lie on the surface like peeling paint (crustose), a powder-like appearance (leprose), or other growth forms. A macrolichen is a lichen that is either bush-like or leafy; all other lichens are termed microlichens. Here, "macro" and "micro" do not refer to size, but to the growth form. Common names for lichens may contain the word moss (e.g., "reindeer moss", "Iceland moss"), and lichens may superficially look like and grow with mosses, but lichens are not related to mosses or any plant. Lichens do not have roots that absorb water and nutrients as plants do, but like plants, they produce their own nutrition by photosynthesis. When they grow on plants, they do not live as parasites, but instead use the plants as a substrate.

Lichens occur from sea level to high alpine elevations, in many environmental conditions, and can grow on almost any surface. Lichens are abundant growing on bark, leaves, mosses, on other lichens, and hanging from branches "living on thin air" (epiphytes) in rain forests and in temperate woodland. They grow on rock, walls, gravestones, roofs, exposed soil surfaces, and in the soil as part of a biological soil crust. Different kinds of lichens have adapted to survive in some of the most extreme environments on Earth: arctic tundra, hot dry deserts, rocky coasts, and toxic slag heaps. They can even live inside solid rock, growing between the grains. It is estimated that 6% of Earth's land surface is covered by lichens. There are about 20,000 known species of lichens. Some lichens have lost the ability to reproduce sexually, yet continue to speciate. Lichens can be seen as being relatively self-contained miniature ecosystems, where the fungi, algae, or cyanobacteria have the potential to engage with other microorganisms in a functioning system that may evolve as an even more complex composite organism. Lichens may be long-lived, with some considered to be among the oldest living things. They are among the first living things to grow on fresh rock exposed after an event such as a landslide. The long life-span and slow and regular growth rate of some lichens can be used to date events (lichenometry).

- 1 Pronunciation and etymology
- 2 Growth forms
- 3 Physiology
- 4 Reproduction and dispersal
- 5 Taxonomy and classification
- 6 Ecology and interactions with environment
- 7 Human use
- 8 History
- 9 Gallery
- 10 See also
- 11 Notes
- 12 References
- 13 Further publications
- 14 External links

Pronunciation and etymology

English lichen derives from Greek λειχήν leichēn ("tree moss, lichen, lichen-like eruption on skin") via Latin lichen. The Greek noun, which literally means "licker", derives from the verb λείχειν leichein, "to lick".

Lichens grow in a wide range of shapes and forms (morphologies). The shape of a lichen is usually determined by the organization of the fungal filaments. The nonreproductive tissues, or vegetative body parts, are called the thallus. Lichens are grouped by thallus type, since the thallus is usually the most visually prominent part of the lichen.
Thallus growth forms typically correspond to a few basic internal structure types. Common names for lichens often come from a growth form or color that is typical of a lichen genus. Common groupings of lichen thallus growth forms are: - fruticose – growing like a tuft or multiple-branched leafless mini-shrub, upright or hanging down, 3-dimensional branches with nearly round cross section (terete) or flattened - foliose – growing in 2-dimensional, flat, leaf-like lobes - crustose – crust-like, adhering tightly to a surface (substrate) like a thick coat of paint - squamulose – formed of small leaf-like scales crustose below but free at the tips - leprose – powdery - gelatinous – jelly like - filamentous – stringy or like matted hair - byssoid – wispy, like teased wool There are variations in growth types in a single lichen species, grey areas between the growth type descriptions, and overlapping between growth types, so some authors might describe lichens using different growth type descriptions. When a crustose lichen gets old, the center may start to crack up like old-dried paint, old-broken asphalt paving, or like the polygonal "islands" of cracked-up mud in a dried lakebed. This is called being rimose or areolate, and the "island" pieces separated by the cracks are called areolas. The areolas appear separated, but are (or were) connected by an underlying "prothallus" or "hypothallus". When a crustose lichen grows from a center and appears to radiate out, it is called crustose placodioid. When the edges of the areolas lift up from the substrate, it is called squamulose.:159 These growth form groups are not precisely defined. Foliose lichens may sometimes branch and appear to be fruticose. Fruticose lichens may have flattened branching parts and appear leafy. Squamulose lichens may appear where the edges lift up. Gelatinous lichens may appear leafy when dry.:159 Means of telling them apart in these cases are in the sections below. Structures involved in reproduction often appear as discs, bumps, or squiggly lines on the surface of the thallus.:4 The thallus is not always the part of the lichen that is most visually noticeable. Some lichens can grow inside solid rock between the grains (endolithic lichens), with only the sexual fruiting part visible growing outside the rock. These may be dramatic in color or appearance. Forms of these sexual parts are not in the above growth form categories. The most visually noticeable reproductive parts are often circular, raised, plate-like or disc-like outgrowths, with crinkly edges, and are described in sections below. Lichens come in many colors.:4 Coloration is usually determined by the photosynthetic component. Special pigments, such as yellow usnic acid, give lichens a variety of colors, including reds, oranges, yellows, and browns, especially in exposed, dry habitats. In the absence of special pigments, lichens are usually bright green to olive gray when wet, gray or grayish-green to brown when dry. This is because moisture causes the surface skin (cortex) to become more transparent, exposing the green photobiont layer. Different colored lichens covering large areas of exposed rock surfaces, or lichens covering or hanging from bark can be a spectacular display when the patches of diverse colors "come to life" or "glow" in brilliant displays following rain. Different colored lichens may inhabit different adjacent sections of a rock face, depending on the angle of exposure to light. 
Colonies of lichens may be spectacular in appearance, dominating much of the surface of the visual landscape in forests and natural places, such as the vertical "paint" covering the vast rock faces of Yosemite National Park. Color is used in identification.:4 Color changes depending on when a lichen is wet or dry. Color descriptions when used for identification are based on when the lichen is dry. Dry lichens with a cyanobacterium as the photosynthetic partner tend to be dark grey, brown, or black. The underside of the leaf-like lobes of foliose lichens is a different color from the top side (dorsiventral), often brown or black, sometimes white. A fruticose lichen may have flattened "branches", appearing similar to a foiliose lichen, but the underside of a leaf-like structure on a fruticose lichen is the same color as the top side. The leaf-like lobes of a foliose lichen may branch, giving the appearance of a fruticose lichen, but the underside will be a different color from the top side. Internal structure and growth forms A lichen consists of a simple photosynthesizing organism, usually a green alga or cyanobacterium, surrounded by filaments of a fungus. Generally, most of a lichen's bulk is made of interwoven fungal filaments, although in filamentous and gelatinous lichens this is not the case. The fungus is called a mycobiont. The photosynthesizing organism is called a photobiont. Algal photobionts are called phycobionts. Cyanobacteria photobionts are called cyanobionts. The part of a lichen that is not involved in reproduction, the "body" or "vegetative tissue" of a lichen, is called the thallus. The thallus form is very different from any form where the fungus or alga are growing separately. The thallus is made up of filaments of the fungus called hyphae. The filaments grow by branching then rejoining to create a mesh, which is called being "anastomose". The mesh of fungal filaments may be dense or loose. Generally, the fungal mesh surrounds the algal or cyanobacterial cells, often enclosing them within complex fungal tissues that are unique to lichen associations. The thallus may or may not have a protective "skin" of densely packed fungal filaments, often containing a second fungal species, which is called a cortex. Fruticose lichens have one cortex layer wrapping around the "branches". Foliose lichens have an upper cortex on the top side of the "leaf", and a separate lower cortex on the bottom side. Crustose and squamulose lichens have only an upper cortex, with the "inside" of the lichen in direct contact with the surface they grow on (the substrate). Even if the edges peel up from the substrate and appear flat and leaf-like, they lack a lower cortex, unlike foliose lichens. Filamentous, byssoid, leprose, gelatinous, and other lichens do not have a cortex, which is called being ecorticate. Fruticose, foliose, crustose, and squamulose lichens generally have up to three different types of tissue, differentiated by having different densities of fungal filaments. The top layer, where the lichen contacts the environment, is called a cortex. The cortex is made of densely tightly woven, packed, and glued together (agglutinated) fungal filaments. The dense packing makes the cortex act like a protective "skin", keeping other organisms out, and reducing the intensity of sunlight on the layers below. The cortex layer can be up to several hundred micrometers (μm) in thickness (less than a millimeter). 
The cortex may be further topped by an epicortex of secretions, not cells, 0.6–1 μm thick in some lichens. This secretion layer may or may not have pores. Below the cortex layer is a layer called the photobiontic layer or symbiont layer. The symbiont layer has less densely packed fungal filaments, with the photosynthetic partner embedded in them. The less dense packing allows air circulation during photosynthesis, similar to the anatomy of a leaf. Each cell or group of cells of the photobiont is usually individually wrapped by hyphae, and in some cases penetrated by a haustorium. In crustose and foliose lichens, algae in the photobiontic layer are diffuse among the fungal filaments, decreasing in gradation into the layer below. In fruticose lichens, the photobiontic layer is sharply distinct from the layer below. The layer beneath the symbiont layer is called the medulla. The medulla is less densely packed with fungal filaments than the layers above. In foliose lichens, there is usually, as in Peltigera, another densely packed layer of fungal filaments called the lower cortex. Root-like fungal structures called rhizines (usually) grow from the lower cortex to attach or anchor the lichen to the substrate. Fruticose lichens have a single cortex wrapping all the way around the "stems" and "branches". The medulla is the lowest layer, and may form a cottony white inner core for the branchlike thallus, or it may be hollow. Crustose and squamulose lichens lack a lower cortex, and the medulla is in direct contact with the substrate that the lichen grows on. In crustose areolate lichens, the edges of the areolas peel up from the substrate and appear leafy. In squamulose lichens the part of the lichen thallus that is not attached to the substrate may also appear leafy. But these leafy parts lack a lower cortex, which distinguishes crustose and squamulose lichens from foliose lichens. Conversely, foliose lichens may appear flattened against the substrate like a crustose lichen, but most of the leaf-like lobes can be lifted up from the substrate because they are separated from it by a tightly packed lower cortex. In lichens that include both green algal and cyanobacterial symbionts, the cyanobacteria may be held on the upper or lower surface in small pustules called cephalodia. In August 2016, it was reported that macrolichens have more than one species of fungus in their tissues.

A lichen is a composite organism that emerges from algae or cyanobacteria living among the filaments (hyphae) of the fungi in a mutually beneficial symbiotic relationship. The fungi benefit from the carbohydrates produced by the algae or cyanobacteria via photosynthesis. The algae or cyanobacteria benefit by being protected from the environment by the filaments of the fungi, which also gather moisture and nutrients from the environment, and (usually) provide an anchor to it. Although some photosynthetic partners in a lichen can survive outside the lichen, the lichen symbiotic association extends the ecological range of both partners, which is why most descriptions of lichen associations describe them as symbiotic. However, while symbiotic, the relationship is probably not mutualistic, since the algae give up a disproportionate amount of their sugars (see below). Both partners gain water and mineral nutrients mainly from the atmosphere, through rain and dust.
The fungal partner protects the alga by retaining water, serving as a larger capture area for mineral nutrients and, in some cases, provides minerals obtained from the substrate. If a cyanobacterium is present, as a primary partner or another symbiont in addition to a green alga as in certain tripartite lichens, they can fix atmospheric nitrogen, complementing the activities of the green alga. In three different lineages the fungal partner has independently lost the mitochondrial gene atp9, which has key functions in mitochondrial energy production. The loss makes the fungi completely dependent on their symbionts. The algal or cyanobacterial cells are photosynthetic and, as in plants, they reduce atmospheric carbon dioxide into organic carbon sugars to feed both symbionts. Phycobionts (algae) produce sugar alcohols (ribitol, sorbitol, and erythritol), which are absorbed by the mycobiont (fungus). Cyanobionts produce glucose. Lichenized fungal cells can make the photobiont "leak" out the products of photosynthesis, where they can then be absorbed by the fungus.:5 It appears many, probably the majority, of lichen also live in a symbiotic relationship with an order of basidiomycete yeasts called Cyphobasidiales. The absence of this third partner could explain the difficulties of growing lichen in the laboratory. The yeast cells are responsible for the formation of the characteristic cortex of the lichen thallus, and could also be important for its shape. The lichen combination of alga or cyanobacterium with a fungus has a very different form (morphology), physiology, and biochemistry than the component fungus, alga, or cyanobacterium growing by itself, naturally or in culture. The body (thallus) of most lichens is different from those of either the fungus or alga growing separately. When grown in the laboratory in the absence of its photobiont, a lichen fungus develops as a structureless, undifferentiated mass of fungal filaments (hyphae). If combined with its photobiont under appropriate conditions, its characteristic form associated with the photobiont emerges, in the process called morphogenesis. In a few remarkable cases, a single lichen fungus can develop into two very different lichen forms when associating with either a green algal or a cyanobacterial symbiont. Quite naturally, these alternative forms were at first considered to be different species, until they were found growing in a conjoined manner. Evidence that lichens are examples of successful symbiosis is the fact that lichens can be found in almost every habitat and geographic area on the planet. Two species in two genera of green algae are found in over 35% of all lichens, but can only rarely be found living on their own outside of a lichen. In a case where one fungal partner simultaneously had two green algae partners that outperform each other in different climates, this might indicate having more than one photosynthetic partner at the same time might enable the lichen to exist in a wider range of habitats and geographic locations. Algae produce sugars that are absorbed by the fungus by diffusion into special fungal hyphae called appressoria or haustoria in contact with the wall of the algal cells. The appressoria or haustoria may produce a substance that increases permeability of the algal cell walls, and may penetrate the walls. The algae may lose up to 80% of their sugar production to the fungus. Lichen associations may be examples of mutualism, commensalism or even parasitism, depending on the species. 
There is evidence to suggest that the lichen symbiosis is parasitic or commensalistic, rather than mutualistic. The photosynthetic partner can exist in nature independently of the fungal partner, but not vice versa. Photobiont cells are routinely destroyed in the course of nutrient exchange. The association is able to continue because reproduction of the photobiont cells matches the rate at which they are destroyed. The fungus surrounds the algal cells, often enclosing them within complex fungal tissues unique to lichen associations. In many species the fungus penetrates the algal cell wall, forming penetration pegs (haustoria) similar to those produced by pathogenic fungi that feed on a host. Cyanobacteria in laboratory settings can grow faster when they are alone rather than when they are part of a lichen.

Miniature ecosystem and holobiont theory

Symbiosis in lichens is so well-balanced that lichens have been considered to be relatively self-contained miniature ecosystems in and of themselves. It is thought that lichens may be even more complex symbiotic systems that include non-photosynthetic bacterial communities performing other functions as partners in a holobiont.

Many lichens are very sensitive to environmental disturbances and can be used to cheaply assess air pollution, ozone depletion, and metal contamination. Lichens have been used in making dyes and perfumes, and in traditional medicines. A few lichen species are eaten by insects or larger animals, such as reindeer. Lichens are widely used as environmental indicators or bio-indicators. If air is very badly polluted with sulphur dioxide, there may be no lichens present and only green algae may be found. If the air is clean, shrubby, hairy and leafy lichens become abundant. A few lichen species can tolerate quite high levels of pollution and are commonly found on pavements, walls and tree bark in urban areas. The most sensitive lichens are shrubby and leafy, while the most tolerant lichens are all crusty in appearance. Since industrialisation many of the shrubby and leafy lichens such as Ramalina, Usnea and Lobaria species have very limited ranges, often being confined to the areas with the purest air.

Some fungi can only be found living on lichens as obligate parasites. These are referred to as lichenicolous fungi, and are a different species from the fungus living inside the lichen; thus they are not considered to be part of the lichen.

Reaction to water

Moisture makes the cortex become more transparent. This way, the algae can conduct photosynthesis when moisture is available and are protected at other times. When the cortex is more transparent, the algae show more clearly and the lichen looks greener.

Metabolites, metabolite structures and bioactivity

Lichens can show intense antioxidant activity. Secondary metabolites are often deposited as crystals in the apoplast. Secondary metabolites are thought to play a role in preference for some substrates over others.

Lichens often have a regular but very slow growth rate of less than a millimeter per year. Different lichen species have been measured to grow as slowly as 0.5 mm per year and as fast as 0.5 meter per year. In crustose lichens, the area along the margin is where the most active growth is taking place. Most crustose lichens grow only 1–2 mm in diameter per year. Lichens may be long-lived, with some considered to be among the oldest living organisms. Lifespan is difficult to measure because what defines the "same" individual lichen is not precise.
Lichens grow by vegetatively breaking off a piece, which may or may not be defined as the "same" lichen, and two lichens can merge, then becoming the "same" lichen. An Arctic species called "map lichen" (Rhizocarpon geographicum) has been dated at 8,600 years, apparently making it the world's oldest living organism.

Response to environmental stress

Unlike simple dehydration in plants and animals, lichens may experience a complete loss of body water in dry periods. Lichens are capable of surviving extremely low levels of water content (poikilohydric). They quickly absorb water when it becomes available again, becoming soft and fleshy. Reconfiguration of membranes following a period of dehydration requires several minutes or more.

In tests, lichens survived a 34-day simulation under Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR), showing a remarkable capacity to adapt their photosynthetic activity. The European Space Agency has discovered that lichens can survive unprotected in space. In an experiment led by Leopoldo Sancho from the Complutense University of Madrid, two species of lichen—Rhizocarpon geographicum and Xanthoria elegans—were sealed in a capsule and launched on a Russian Soyuz rocket on 31 May 2005. Once in orbit, the capsules were opened and the lichens were directly exposed to the vacuum of space with its widely fluctuating temperatures and cosmic radiation. After 15 days, the lichens were brought back to Earth and were found to be unchanged in their ability to photosynthesize.

Reproduction and dispersal

Many lichens reproduce asexually, either by a piece breaking off and growing on its own (vegetative reproduction) or through the dispersal of diaspores containing a few algal cells surrounded by fungal cells. Because of the relative lack of differentiation in the thallus, the line between diaspore formation and vegetative reproduction is often blurred. Fruticose lichens can easily fragment, and new lichens can grow from the fragment (vegetative reproduction). Many lichens break up into fragments when they dry, dispersing themselves by wind action, to resume growth when moisture returns. Soredia (singular: "soredium") are small groups of algal cells surrounded by fungal filaments that form in structures called soralia, from which the soredia can be dispersed by wind. Isidia (singular: "isidium") are branched, spiny, elongated outgrowths from the thallus that break off for mechanical dispersal. Lichen propagules (diaspores) typically contain cells from both partners, although the fungal components of so-called "fringe species" rely instead on algal cells dispersed by the "core species". Structures involved in reproduction often appear as discs, bumps, or squiggly lines on the surface of the thallus.

Only the fungal partner in a lichen reproduces sexually. Many lichen fungi reproduce sexually like other fungi, producing spores formed by meiosis and fusion of gametes. Following dispersal, such fungal spores must meet with a compatible algal partner before a functional lichen can form. Most lichen fungi belong to the Ascomycetes (ascolichens). Among the ascolichens, spores are produced in spore-producing structures called ascomata. The most common types of ascomata are the apothecium (plural: apothecia) and perithecium (plural: perithecia). Apothecia are usually cups or plate-like discs located on the top surface of the lichen thallus.
When apothecia are shaped like squiggly line segments instead of like discs, they are called lirellae. Perithecia are shaped like flasks immersed in the lichen thallus tissue, each with a small hole for the spores to escape, and appear like black dots on the lichen surface. The three most common spore body types are raised discs called apothecia (singular: apothecium), bottle-like cups with a small hole at the top called perithecia (singular: perithecium), and pycnidia (singular: pycnidium), shaped like perithecia but without asci (an ascus is the structure that contains and releases the sexual spores in fungi of the Ascomycota).

The apothecium has a layer of exposed spore-producing cells called asci (singular: ascus), and is usually a different color from the thallus tissue. When the apothecium has an outer margin, the margin is called the exciple. When the exciple has a color similar to the colored thallus tissue, the apothecium or lichen is called lecanorine, meaning similar to members of the genus Lecanora. When the exciple is blackened like carbon, it is called lecideine, meaning similar to members of the genus Lecidea. When the margin is pale or colorless, it is called biatorine.

A "podetium" (plural: podetia) is a lichenized stalk-like structure of the fruiting body rising from the thallus, associated with some fungi that produce a fungal apothecium. Since it is part of the reproductive tissue, a podetium is not considered part of the main body (thallus), but may be visually prominent. The podetium may be branched, and sometimes cup-like. Podetia usually bear the fungal pycnidia or apothecia or both. Many lichens have apothecia that are visible to the naked eye.

Most lichens produce abundant sexual structures. Many species appear to disperse only by sexual spores. For example, the crustose lichens Graphis scripta and Ochrolechia parella produce no symbiotic vegetative propagules. Instead, the lichen-forming fungi of these species reproduce sexually by self-fertilization (i.e. they are homothallic). This breeding system may enable successful reproduction in harsh environments.

Mazaedia (singular: mazaedium) are apothecia shaped like a dressmaker's pin in pin lichens, where the fruiting body is a brown or black mass of loose ascospores enclosed by a cup-shaped exciple, which sits on top of a tiny stalk.

Taxonomy and classification

Lichens are classified by the fungal component. Lichen species are given the same scientific name (binomial name) as the fungus species in the lichen. Lichens are being integrated into the classification schemes for fungi. The alga bears its own scientific name, which bears no relationship to that of the lichen or fungus. There are about 13,500–17,000 identified lichen species. Nearly 20% of known fungal species are associated with lichens. "Lichenized fungus" may refer to the entire lichen, or to just the fungus. This may cause confusion without context. A particular fungus species may form lichens with different algae species, giving rise to what appear to be different lichen species, but which are still classified (as of 2014) as the same lichen species. Formerly, some lichen taxonomists placed lichens in their own division, the Mycophycophyta, but this practice is no longer accepted because the components belong to separate lineages.
Neither the ascolichens nor the basidiolichens form monophyletic lineages in their respective fungal phyla, but they do form several major solely or primarily lichen-forming groups within each phylum. Even more unusual than basidiolichens is the fungus Geosiphon pyriforme, a member of the Glomeromycota that is unique in that it encloses a cyanobacterial symbiont inside its cells. Geosiphon is not usually considered to be a lichen, and its peculiar symbiosis was not recognized for many years. The genus is more closely allied to endomycorrhizal genera. Fungi from the Verrucariales also form marine lichens with the brown alga Petroderma maculiforme, and have a symbiotic relationship with seaweeds such as rockweed and Blidingia minima, where the algae are the dominant components. The fungus is thought to help the rockweeds resist desiccation when exposed to air. In addition, lichens can also use yellow-green algae (Heterococcus) as their symbiotic partner. Lichens independently emerged from fungi associating with algae and cyanobacteria multiple times throughout history.

The fungal component of a lichen is called the mycobiont. The mycobiont may be an Ascomycete or Basidiomycete. The associated lichens are called either ascolichens or basidiolichens, respectively. Living as a symbiont in a lichen appears to be a successful way for a fungus to derive essential nutrients, since about 20% of all fungal species have acquired this mode of life. Thalli produced by a given fungal symbiont with its differing partners may be similar, and the secondary metabolites identical, indicating that the fungus has the dominant role in determining the morphology of the lichen. But the same mycobiont with different photobionts may also produce very different growth forms. Lichens are known in which there is one fungus associated with two or even three algal species. Although each lichen thallus generally appears homogeneous, some evidence seems to suggest that the fungal component may consist of more than one genetic individual of that species. Two or more fungal species can interact to form the same lichen.

The photosynthetic partner in a lichen is called a photobiont. The photobionts in lichens come from a variety of simple prokaryotic and eukaryotic organisms. In the majority of lichens the photobiont is a green alga (Chlorophyta) or a cyanobacterium. In some lichens both types are present. Algal photobionts are called phycobionts, while cyanobacterial photobionts are called cyanobionts. According to one source, about 90% of all known lichens have phycobionts and about 10% have cyanobionts, while another source states that two thirds of lichens have green algae as phycobiont and about one third have a cyanobiont. Approximately 100 species of photosynthetic partners from 40 genera and five distinct classes (prokaryotic: Cyanophyceae; eukaryotic: Trebouxiophyceae, Phaeophyceae, Chlorophyceae) have been found to associate with the lichen-forming fungi. Common algal photobionts are from the genera Trebouxia, Trentepohlia, Pseudotrebouxia, or Myrmecia. Trebouxia is the most common genus of green algae in lichens, occurring in about 40% of all lichens. "Trebouxioid" means either a photobiont that is in the genus Trebouxia or resembles a member of that genus, and is therefore presumably a member of the class Trebouxiophyceae. The second most commonly represented green alga genus is Trentepohlia. Overall, about 100 species of eukaryotes are known to occur as photobionts in lichens.
All the algae are probably able to exist independently in nature as well as in the lichen. A "cyanolichen" is a lichen with a cyanobacterium as its main photosynthetic component (photobiont). Most cyanolichens are also ascolichens, but a few basidiolichens such as Dictyonema and Acantholichen have cyanobacteria as their partner. The most commonly occurring cyanobacterium genus is Nostoc. Other common cyanobacterium photobionts are from Scytonema. Many cyanolichens are small and black, and have limestone as the substrate. Another cyanolichen group, the jelly lichens of the genera Collema or Leptogium, are gelatinous and live on moist soils. Another group of large and foliose species including Peltigera, Lobaria, and Degelia are grey-blue, especially when dampened or wet. Many of these characterize the Lobarion communities of higher rainfall areas in western Britain, e.g., in the Celtic rain forest. Strains of cyanobacteria found in various cyanolichens are often closely related to one another. They differ from the most closely related free-living strains.

The lichen association is a close symbiosis. It extends the ecological range of both partners but is not always obligatory for their growth and reproduction in natural environments, since many of the algal symbionts can live independently. A prominent example is the alga Trentepohlia, which forms orange-coloured populations on tree trunks and suitable rock faces. The same cyanobiont species can occur in association with different fungal species as lichen partners. The same phycobiont species can occur in association with different fungal species as lichen partners. More than one phycobiont may be present in a single thallus. A single lichen may contain several algal genotypes. These multiple genotypes may better enable adaptation to environmental changes, and enable the lichen to inhabit a wider range of environments.

Controversy over classification method and species names

There are about 20,000 known lichen species. But what is meant by "species" here is different from what is meant by a biological species in plants, animals, or fungi, where being the same species implies that there is a common ancestral lineage. Because lichens are combinations of members of two or even three different biological kingdoms, these components must have a different ancestral lineage from each other. By convention, lichens are still called "species" anyway, and are classified according to the species of their fungus, not the species of the algae or cyanobacteria. Lichens are given the same scientific name (binomial name) as the fungus in them, which may cause some confusion. The alga bears its own scientific name, which has no relationship to the name of the lichen or fungus. Depending on context, "lichenized fungus" may refer to the entire lichen, or to the fungus when it is in the lichen, which can be grown in culture in isolation from the algae or cyanobacteria. Some algae and cyanobacteria are found naturally living outside of the lichen. The fungal, algal, or cyanobacterial component of a lichen can be grown by itself in culture. When growing by themselves, the fungus, algae, or cyanobacteria have very different properties than those of the lichen.
Lichen properties such as growth form, physiology, and biochemistry are very different from the combination of the properties of the fungus and the algae or cyanobacteria. The same fungus growing in combination with different algae or cyanobacteria can produce lichens that are very different in most properties, meeting non-DNA criteria for being different "species". Historically, these different combinations were classified as different species. When the fungus is identified as being the same using modern DNA methods, these apparently different species get reclassified as the same species under the current (2014) convention for classification by fungal component. This has led to debate about this classification convention. These apparently different "species" have their own independent evolutionary history. There is also debate as to the appropriateness of giving the same binomial name to the fungus and to the lichen that combines that fungus with an alga or cyanobacterium (synecdoche). This is especially the case when combining the same fungus with different algae or cyanobacteria produces dramatically different lichen organisms, which would be considered different species by any measure other than the DNA of the fungal component. If the whole lichen produced by the same fungus growing in association with different algae or cyanobacteria were to be classified as different "species", the number of "lichen species" would be greater.

The largest number of lichenized fungi occur in the Ascomycota, with about 40% of species forming such an association. Some of these lichenized fungi occur in orders with nonlichenized fungi that live as saprotrophs or plant parasites (for example, the Leotiales, Dothideales, and Pezizales). Other lichen fungi occur in only five orders in which all members are engaged in this habit (the orders Graphidales, Gyalectales, Peltigerales, Pertusariales, and Teloschistales). Overall, about 98% of lichens have an ascomycetous mycobiont. Next to the Ascomycota, the largest number of lichenized fungi occur in the unassigned fungi imperfecti, a catch-all category for fungi whose sexual form of reproduction has never been observed. Comparatively few Basidiomycetes are lichenized, but these include agarics, such as species of Lichenomphalia, clavarioid fungi, such as species of Multiclavula, and corticioid fungi, such as species of Dictyonema.

Lichen identification uses growth form and reactions to chemical tests. "Pd" refers to the outcome of the Pd test or is used as an abbreviation for the chemical used in the test, para-phenylenediamine. If a drop placed on a lichen turns an area bright yellow to orange, this helps identify it as belonging to either the genus Cladonia or Lecanora.

Evolution and paleontology

The fossil record for lichens is poor. The extreme habitats that lichens dominate, such as tundra, mountains, and deserts, are not ordinarily conducive to producing fossils. There are fossilized lichens embedded in amber. The fossilized Anzia is found in pieces of amber in northern Europe and dates back approximately 40 million years. Lichen fragments are also found in fossil leaf beds, such as Lobaria from Trinity County in northern California, USA, dating back to the early to middle Miocene. The oldest fossil lichens in which both symbiotic partners have been recovered date to the Early Devonian Rhynie chert, about 400 million years old.
The slightly older fossil Spongiophyton has also been interpreted as a lichen on morphological and isotopic grounds, although the isotopic basis is decidedly shaky. It has been demonstrated that the Silurian-Devonian fossils Nematothallus and Prototaxites were lichenized. Thus lichenized Ascomycota and Basidiomycota were a component of early Silurian-Devonian terrestrial ecosystems. The ancestral ecological state of both Ascomycota and Basidiomycota was probably saprobism, and independent lichenization events may have occurred multiple times. In 1995, Gargas and colleagues proposed that there were at least five independent origins of lichenization: three in the basidiomycetes and at least two in the ascomycetes. However, Lutzoni et al. (2001) indicate that lichenization probably evolved earlier and was followed by multiple independent losses. Some non-lichen-forming fungi may have secondarily lost the ability to form a lichen association. As a result, lichenization has been viewed as a highly successful nutritional strategy.

Lichenized Glomeromycota may extend well back into the Precambrian. Winfrenatia, an early zygomycetous (Glomeromycota) lichen symbiosis that may have involved controlled parasitism, is permineralized in the Rhynie chert of Scotland, of early Devonian age. Lichen-like fossils consisting of coccoid cells (cyanobacteria?) and thin filaments (mucoromycotinan Glomeromycota?) are permineralized in marine phosphorite of the Doushantuo Formation in southern China. These fossils are thought to be 551 to 635 million years old, or Ediacaran. Ediacaran acritarchs also have many similarities with Glomeromycotan vesicles and spores. It has also been claimed that Ediacaran fossils, including Dickinsonia, were lichens, although this claim is controversial. Endosymbiotic Glomeromycota comparable with living Geosiphon may extend back into the Proterozoic in the form of the 1,500-million-year-old Horodyskia and the 2,200-million-year-old Diskagma. Discovery of these fossils suggests that fungi developed symbiotic partnerships with photoautotrophs long before the evolution of vascular plants.

Ecology and interactions with environment

Substrates and habitats

Lichens grow on and in a wide range of substrates and habitats, including some of the most extreme conditions on Earth. They are abundant growing on bark, leaves, and hanging from branches "living on thin air" (epiphytes) in rain forests and in temperate woodland. They grow on bare rock, walls, gravestones, roofs, and exposed soil surfaces. They can survive in some of the most extreme environments on Earth: arctic tundra, hot dry deserts, rocky coasts, and toxic slag heaps. They can even live inside solid rock, growing between the grains, and in the soil as part of a biological soil crust in arid habitats such as deserts. Some lichens do not grow on anything, living out their lives blowing about the environment. When growing on mineral surfaces, some lichens slowly decompose their substrate by chemically degrading and physically disrupting the minerals, contributing to the process of weathering by which rocks are gradually turned into soil. While this contribution to weathering is usually benign, it can cause problems for artificial stone structures. For example, there is an ongoing lichen growth problem on Mount Rushmore National Memorial that requires the employment of mountain-climbing conservators to clean the monument. Lichens are not parasites on the plants they grow on, but only use them as a substrate to grow on.
The fungi of some lichen species may "take over" the algae of other lichen species. Lichens make their own food from their photosynthetic parts and by absorbing minerals from the environment. Lichens growing on leaves may have the appearance of being parasites on the leaves, but they are not. However, some lichens, notably those of the genus Diploschistes, are known to parasitise other lichens. Diploschistes muscorum starts its development in the tissue of a host Cladonia species. In the arctic tundra, lichens, together with mosses and liverworts, make up the majority of the ground cover, which helps insulate the ground and may provide forage for grazing animals. An example is "Reindeer moss", which is a lichen, not a moss.

A crustose lichen that grows on rock is called a saxicolous lichen. Crustose lichens that grow on the surface of rock are epilithic, and those that grow immersed inside rock, growing between the crystals with only their fruiting bodies exposed to the air, are called endolithic lichens. A crustose lichen that grows on bark is called a corticolous lichen. A lichen that grows on wood from which the bark has been stripped is called a lignicolous lichen. Lichens that grow immersed inside plant tissues are called endophloidic lichens or endophloidal lichens. Lichens that use leaves as substrates, whether the leaf is still on the tree or on the ground, are called epiphyllous or foliicolous. A terricolous lichen grows on the soil as a substrate. Many squamulose lichens are terricolous. Umbilicate lichens are foliose lichens that are attached to the substrate at only one point. A vagrant lichen is not attached to a substrate at all, and lives its life being blown around by the wind.

Lichens and soils

In addition to the distinct physical mechanisms by which lichens break down raw stone, recent studies indicate that lichens also attack stone chemically, releasing newly chelated minerals into the environment. Lichen exudates, which have powerful chelating capacity, the widespread occurrence of mineral neoformation (particularly metal oxalates), and the characteristics of weathered substrates all confirm the significance of lichens as chemical weathering agents. Over time, this activity creates new fertile soil from lifeless stone. Lichens may be important in contributing nitrogen to soils in some deserts through being eaten, along with their rock substrate, by snails, which then defecate, putting the nitrogen into the soils. Lichens help bind and stabilize soil sand in dunes. In deserts and semi-arid areas, lichens are part of extensive, living biological soil crusts, essential for maintaining the soil structure. Lichens have a long fossil record in soils dating back 2.2 billion years.

Lichens are pioneer species, among the first living things to grow on bare rock or areas denuded of life by a disaster. Lichens may have to compete with plants for access to sunlight, but because of their small size and slow growth, they thrive in places where higher plants have difficulty growing. Lichens are often the first to settle in places lacking soil, constituting the sole vegetation in some extreme environments such as those found at high mountain elevations and at high latitudes. Some survive in the tough conditions of deserts, and others on the frozen soil of the Arctic regions.
A major ecophysiological advantage of lichens is that they are poikilohydric (poikilo- variable, hydric- relating to water), meaning that though they have little control over the status of their hydration, they can tolerate irregular and extended periods of severe desiccation. Like some mosses, liverworts, ferns, and a few "resurrection plants", upon desiccation, lichens enter a metabolic suspension or stasis (known as cryptobiosis) in which the cells of the lichen symbionts are dehydrated to a degree that halts most biochemical activity. In this cryptobiotic state, lichens can survive wider extremes of temperature, radiation and drought in the harsh environments they often inhabit.

Lichens do not have roots and do not need to tap continuous reservoirs of water like most higher plants, thus they can grow in locations impossible for most plants, such as bare rock, sterile soil or sand, and various artificial structures such as walls, roofs and monuments. Many lichens also grow as epiphytes (epi- on the surface, phyte- plant) on plants, particularly on the trunks and branches of trees. When growing on plants, lichens are not parasites; they do not consume any part of the plant nor poison it. Lichens produce allelopathic chemicals that inhibit the growth of mosses. Some ground-dwelling lichens, such as members of the subgenus Cladina (reindeer lichens), produce allelopathic chemicals that leach into the soil and inhibit the germination of the seeds of spruce and other plants. Stability (that is, longevity) of their substrate is a major factor of lichen habitats. Most lichens grow on stable rock surfaces or the bark of old trees, but many others grow on soil and sand. In these latter cases, lichens are often an important part of soil stabilization; indeed, in some desert ecosystems, vascular (higher) plant seeds cannot become established except in places where lichen crusts stabilize the sand and help retain water.

Lichens may be eaten by some animals living in arctic regions, such as reindeer. The larvae of a number of Lepidoptera species feed exclusively on lichens. These include the Common Footman and the Marbled Beauty. However, lichens are very low in protein and high in carbohydrates, making them unsuitable for some animals. Lichens are also used by the Northern Flying Squirrel for nesting, food, and a water source during winter.

Effects of air pollution

Lichens are exposed to air pollutants at all times and, having no deciduous parts, they are unable to avoid the accumulation of pollutants. Also lacking stomata and a cuticle, lichens may absorb aerosols and gases over the entire thallus surface, from which they may readily diffuse to the photobiont layer. Because lichens do not possess roots, their primary source of most elements is the air, and therefore elemental levels in lichens often reflect the accumulated composition of ambient air. The processes by which atmospheric deposition occurs include fog and dew, gaseous absorption, and dry deposition. Consequently, many environmental studies with lichens emphasize their feasibility as effective biomonitors of atmospheric quality. Not all lichens are equally sensitive to air pollutants, so different lichen species show different levels of sensitivity to specific atmospheric pollutants. The sensitivity of a lichen to air pollution is directly related to the energy needs of the mycobiont, so that the stronger the dependency of the mycobiont on the photobiont, the more sensitive the lichen is to air pollution.
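The growth-form rule of thumb described earlier (shrubby and leafy lichens only where the air is clean, crusty lichens persisting in polluted air, and bare "lichen deserts" with only green algae where pollution is severe) can be expressed as a crude site-screening heuristic. The sketch below is purely illustrative and is not a published biomonitoring index; the category names and output labels are assumptions made here for demonstration.

```python
# Illustrative only: map the lichen growth forms observed at a site to a rough
# qualitative air-quality class, following the rule of thumb in the text.
# Real surveys use calibrated, species-level scales and measured pollutant levels.

def rough_air_quality(growth_forms_present: set) -> str:
    """growth_forms_present: a set of strings, e.g. {"crustose", "foliose"}."""
    # Shrubby (fruticose) and leafy (foliose) lichens are the most sensitive,
    # so their presence suggests relatively clean air.
    if growth_forms_present & {"fruticose", "foliose"}:
        return "relatively clean air"
    # Crusty (crustose) lichens are the most pollution-tolerant.
    if "crustose" in growth_forms_present:
        return "polluted air (only tolerant crustose species remain)"
    # No lichens at all: only green algae may remain.
    return "heavily polluted air (lichen desert)"


if __name__ == "__main__":
    print(rough_air_quality({"crustose"}))                          # polluted air ...
    print(rough_air_quality({"crustose", "foliose", "fruticose"}))  # relatively clean air
```

A heuristic of this kind only ranks sites relative to one another; as the surrounding text notes, sensitivity also varies from species to species within each growth form.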
Upon exposure to air pollution, the photobiont may use metabolic energy for repair of its cellular structures that would otherwise be used for maintenance of its photosynthetic activity, therefore leaving less metabolic energy available for the mycobiont. The alteration of the balance between the photobiont and mycobiont can lead to the breakdown of the symbiotic association. Therefore, lichen decline may result not only from the accumulation of toxic substances, but also from altered nutrient supplies that favor one symbiont over the other.

Lichens are eaten by many different cultures across the world. Although some lichens are only eaten in times of famine, others are a staple food or even a delicacy. Two obstacles are often encountered when eating lichens: lichen polysaccharides are generally indigestible to humans, and lichens usually contain mildly toxic secondary compounds that should be removed before eating. Very few lichens are poisonous, but those high in vulpinic acid or usnic acid are toxic. Most poisonous lichens are yellow. In the past, Iceland moss (Cetraria islandica) was an important source of food for humans in northern Europe, and was cooked as a bread, porridge, pudding, soup, or salad. Wila (Bryoria fremontii) was an important food in parts of North America, where it was usually pitcooked. Northern peoples in North America and Siberia traditionally eat the partially digested reindeer lichen (Cladina spp.) after they remove it from the rumen of caribou or reindeer that have been killed. Rock tripe (Umbilicaria spp. and Lasalia spp.) is a lichen that has frequently been used as an emergency food in North America, and one species, Umbilicaria esculenta, is used in a variety of traditional Korean and Japanese foods.

Lichenometry is a technique used to determine the age of exposed rock surfaces based on the size of lichen thalli. Introduced by Beschel in the 1950s, the technique has found many applications; it is used in archaeology, palaeontology, and geomorphology. It uses the presumed regular but slow rate of lichen growth to determine the age of exposed rock. Measuring the diameter (or other size measurement) of the largest lichen of a species on a rock surface indicates the length of time since the rock surface was first exposed (see the illustrative sketch below). Lichens can be preserved on old rock faces for up to 10,000 years, providing the maximum age limit of the technique, though it is most accurate (within 10% error) when applied to surfaces that have been exposed for less than 1,000 years. Lichenometry is especially useful for dating surfaces less than 500 years old, as radiocarbon dating techniques are less accurate over this period. The lichens most commonly used for lichenometry are those of the genera Rhizocarpon (e.g. the species Rhizocarpon geographicum) and Xanthoria.

Lichens have been shown to degrade polyester resins, as can be seen in archaeological sites in the Roman city of Baelo Claudia in Spain. Lichens can accumulate several environmental pollutants such as lead, copper, and radionuclides. Many lichens produce secondary compounds, including pigments that reduce harmful amounts of sunlight and powerful toxins that reduce herbivory or kill bacteria. These compounds are very useful for lichen identification, and have had economic importance as dyes such as cudbear or primitive antibiotics.
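The lichenometric estimate referenced above reduces to a simple calculation: the largest thallus diameter divided by an assumed growth rate, plus a short delay before colonization. The minimal sketch below makes that arithmetic explicit; the default growth rate, the colonization lag, and the assumption of constant (linear) growth are placeholders for illustration only, since real studies rely on calibration curves built from surfaces of known age.

```python
# Minimal lichenometry sketch: estimate years since a rock surface was exposed
# from the diameter of the largest thallus of a slow-growing species such as
# Rhizocarpon geographicum. The default rate and lag below are illustrative
# placeholders, not calibrated values.

def estimate_exposure_age(largest_diameter_mm: float,
                          growth_rate_mm_per_year: float = 0.5,
                          colonization_lag_years: float = 10.0) -> float:
    """Return an approximate surface age in years, assuming linear growth."""
    if largest_diameter_mm <= 0 or growth_rate_mm_per_year <= 0:
        raise ValueError("diameter and growth rate must be positive")
    return colonization_lag_years + largest_diameter_mm / growth_rate_mm_per_year


if __name__ == "__main__":
    # A 45 mm thallus at an assumed 0.5 mm/year of diameter growth,
    # plus a 10-year lag, gives roughly a century since exposure.
    print(round(estimate_exposure_age(45.0)))  # -> 100
```

Consistent with the limits given above, an estimate of this kind is only meaningful for surfaces exposed for less than about a thousand years, and the error grows with the age of the surface.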
In the Highlands of Scotland, traditional dyes for Harris tweed and other traditional cloths were made from lichens, including the orange Xanthoria parietina and the grey foliaceous Parmelia saxatilis common on rocks, known as "crottle". There are reports dating back almost 2,000 years of lichens being used to make purple and red dyes. Of great historical and commercial significance are lichens belonging to the family Roccellaceae, commonly called orchella weed or orchil. Orcein and other lichen dyes have largely been replaced by synthetic versions.

Traditional medicine and research

Historically in the traditional medicine of Europe, Lobaria pulmonaria was collected in large quantities as "Lungwort", due to its lung-like appearance (the doctrine of signatures suggesting that herbs can treat the body parts that they physically resemble). Similarly, Peltigera leucophlebia was used as a supposed cure for thrush, due to the resemblance of its cephalodia to the appearance of the disease. Lichens produce metabolites that are under research for their potential therapeutic or diagnostic value. Some metabolites produced by lichens are structurally and functionally similar to broad-spectrum antibiotics, while a few are associated with antiseptic properties. Usnic acid is the most commonly studied metabolite produced by lichens. It is also under research as a bactericidal agent against Escherichia coli and Staphylococcus aureus.

Colonies of lichens may be spectacular in appearance, dominating the surface of the visual landscape as part of the aesthetic appeal to paying visitors of Yosemite National Park and Sequoia National Park. Orange and yellow lichens add to the ambience of desert trees, rock faces, tundras, and rocky seashores. Intricate webs of lichens hanging from tree branches add a mysterious aspect to forests. Fruticose lichens are used in model railroading and other modeling hobbies as a material for making miniature trees and shrubs.

In early Midrashic literature, the Hebrew word "vayilafeth" in Ruth 3:8 is explained as referring to Ruth entwining herself around Boaz like lichen. The tenth-century Arab physician Al-Tamimi mentions lichens dissolved in vinegar and rose water being used in his day for the treatment of skin diseases and rashes.

Although lichens had been recognized as organisms for quite some time, it was not until 1867, when Swiss botanist Simon Schwendener proposed his dual theory of lichens (that lichens are a combination of fungi with algae or cyanobacteria), that the true nature of the lichen association began to emerge. Schwendener's hypothesis, which at the time lacked experimental evidence, arose from his extensive analysis of the anatomy and development in lichens, algae, and fungi using a light microscope. Many of the leading lichenologists at the time, such as James Crombie and Nylander, rejected Schwendener's hypothesis because the common consensus was that all living organisms were autonomous. Other prominent biologists, such as Heinrich Anton de Bary, Albert Bernhard Frank, Melchior Treub and Hermann Hellriegel, were not so quick to reject Schwendener's ideas, and the concept soon spread into other areas of study, such as microbial, plant, animal and human pathogens. When the complex relationships between pathogenic microorganisms and their hosts were finally identified, Schwendener's hypothesis began to gain popularity.
Further experimental proof of the dual nature of lichens was obtained when Eugen Thomas published his results in 1939 on the first successful re-synthesis experiment. In the 2010s, a new facet of the fungi-algae partnership was discovered. Toby Spribille and colleagues found that many types of lichen that were long thought to be ascomycete-algae pairs were actually ascomycete-basidiomycete-algae trios. Lobaria pulmonaria, tree lungwort, lung lichen, lung moss; Upper Bavaria, Germany Cladonia macilenta var. bacillaris 'Lipstick Cladonia' Xanthoparmelia cf. lavicola, a foliose lichen, on basalt. Map lichen (Rhizocarpon geographicum) on rock Reindeer moss (Cladonia rangiferina) A crusty crustose lichen on a wall Microscopic view of lichen growing on a piece of concrete dust.[a] - This was scraped from a dry, concrete-paved section of a drainage ditch. This entire image covers a square that is approximately 1.7 millimeters on a side. The numbered ticks on the scale represent distances of 230 micrometers, or slightly less than 0.25 millimeter. - Spribille, Toby; Tuovinen, Veera; Resl, Philipp; Vanderpool, Dan; Wolinski, Heimo; Aime, M. Catherine; Schneider, Kevin; Stabentheiner, Edith; Toome-Heller, Merje (2016-07-21). "Basidiomycete yeasts in the cortex of ascomycete macrolichens". Science. 353 (6298): 488–92. Bibcode:2016Sci...353..488S. doi:10.1126/science.aaf8287. ISSN 0036-8075. PMC 5793994. PMID 27445309. - "What is a lichen?". Australian National Botanic Gardens. Retrieved 10 October 2014. - Introduction to Lichens – An Alliance between Kingdoms. University of California Museum of Paleontology. - Brodo, Irwin M. and Duran Sharnoff, Sylvia (2001) Lichens of North America. ISBN 978-0300082494. - Galloway, D.J. (13 May 1999). "Lichen Glossary". Australian National Botanic Gardens. Archived from the original on 6 December 2014. - Margulis, Lynn; Barreno, EVA (2003). "Looking at Lichens". BioScience. 53 (8): 776. doi:10.1641/0006-3568(2003)053[0776:LAL]2.0.CO;2. - Sharnoff, Stephen (2014) Field Guide to California Lichens, Yale University Press. ISBN 978-0-300-19500-2 - Speer, Brian R; Ben Waggoner (May 1997). "Lichens: Life History & Ecology". University of California Museum of Paleontology. Retrieved 28 April 2015. - Gadd, Geoffrey Michael (March 2010). "Metals, minerals and microbes: geomicrobiology and bioremediation". Microbiology. 156 (Pt 3): 609–643. doi:10.1099/mic.0.037143-0. PMID 20019082. - McCune, B.; Grenon, J.; Martin, E.; Mutch, L.S.; Martin, E.P. (Mar 2007). "Lichens in relation to management issues in the Sierra Nevada national parks". North American Fungi. 2: 1–39. doi:10.2509/pnwf.2007.002.003. - "Lichens: Systematics, University of California Museum of Paleontology". Retrieved 10 October 2014. - Lendemer, J. C. (2011). "A taxonomic revision of the North American species of Lepraria s.l. that produce divaricatic acid, with notes on the type species of the genus L. incana". Mycologia. 103 (6): 1216–1229. doi:10.3852/11-032. PMID 21642343. - Casano, L. M.; Del Campo, E. M.; García-Breijo, F. J.; Reig-Armiñana, J; Gasulla, F; Del Hoyo, A; Guéra, A; Barreno, E (2011). "Two Trebouxia algae with different physiological performances are ever-present in lichen thalli of Ramalina farinacea. Coexistence versus competition?". Environmental Microbiology (Submitted manuscript). 13 (3): 806–818. doi:10.1111/j.1462-2920.2010.02386.x. hdl:10251/60269. PMID 21134099. - Honegger, R. 
(1991) Fungal evolution: symbiosis and morphogenesis, Symbiosis as a Source of Evolutionary Innovation, Margulis, L., and Fester, R. (eds). Cambridge, MA, USA: The MIT Press, pp. 319–340. - Grube, M; Cardinale, M; De Castro Jr, J. V.; Müller, H; Berg, G (2009). "Species-specific structural and functional diversity of bacterial communities in lichen symbioses". The ISME Journal. 3 (9): 1105–1115. doi:10.1038/ismej.2009.63. PMID 19554038. - Barreno, E., Herrera-Campos, M., García-Breijo, F., Gasulla, F., and Reig-Armiñana, J. (2008) "Non photosynthetic bacteria associated to cortical structures on Ramalinaand Usnea thalli from Mexico"[permanent dead link]. Asilomar, Pacific Grove, CA, USA: Abstracts IAL 6- ABLS Joint Meeting. - Morris J, Purvis W (2007). Lichens (Life). London: The Natural History Museum. p. 19. ISBN 978-0-565-09153-8. - "Lichen". spectator.co.uk. 17 November 2012. - "Lichens – Horticulture and Home Pest News". iastate.edu. - "Lichen". Oxford Dictionaries. Oxford University Press. Retrieved 2014-11-02. - Harper, Douglas. "lichen". Online Etymology Dictionary. - lichen. Charlton T. Lewis and Charles Short. A Latin Dictionary on Perseus Project. - λειχήν. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project. - λείχειν in Liddell and Scott. - Beekes, Robert S. P. (2010). "s.v. λειχήν, λείχω". Etymological Dictionary of Greek. Leiden Indo-European Etymological Dictionary Series. 1. With the assistance of Lucien van Beek. Leiden, Boston: Brill. pp. 846–47. ISBN 9789004174184. - "Lichens and Bryophytes, Michigan State University, 10-25-99". Retrieved 10 October 2014. - Lichen Vocabulary, Lichens of North America Information, Sylvia and Stephen Sharnoff, - "Alan Silverside's Lichen Glossary (p-z), Alan Silverside". Retrieved 10 October 2014. - Dobson, F.S. (2011). Lichens, an illustrated guide to the British and Irish species. Slough, UK: Richmond Publishing Co. ISBN 9780855463151. - "Foliose lichens, Lichen Thallus Types, Allan Silverside". Retrieved 10 October 2014. - Mosses Lichens & Ferns of Northwest North America, Dale H. Vitt, Janet E. Marsh, Robin B. Bovey, Lone Pine Publishing Company, ISBN 0-295-96666-1 - "Lichens, Saguaro-Juniper Corporation". Retrieved 10 October 2014. - Michigan Lichens, Julie Jones Medlin, B. Jain Publishers, 1996, ISBN 0877370397, 9780877370390, - Lichens: More on Morphology, University of California Museum of Paleontology, - Lichen Photobionts, University of Nebraska Omaha Archived 6 October 2014 at the Wayback Machine. - "Alan Silverside's Lichen Glossary (g-o), Alan Silverside". Retrieved 10 October 2014. - Büdel, B.; Scheidegger, C. (1996). Thallus morphology and anatomy. Lichen Biology. pp. 37–64. doi:10.1017/CBO9780511790478.005. ISBN 9780511790478. - Heiđmarsson, Starri; Heidmarsson, Starri (1996). "Pruina as a Taxonomic Character in the Lichen Genus Dermatocarpon". The Bryologist. 99 (3): 315–320. doi:10.2307/3244302. JSTOR 3244302. - Sharnoff, Sylvia and Sharnoff, Stephen. "Lichen Biology and the Environment". sharnoffphotos.com - Reductions in complexity of mitochondrial genomes in lichen‐forming fungi shed light on genome architecture of obligate symbioses - Wiley Online Library - Basidiomycete yeasts in the cortex of ascomycete macrolichens – Science - Skaloud, P; Peksa, O (2010). "Evolutionary inferences based on ITS rDNA and actin sequences reveal extensive diversity of the common lichen alga Asterochloris (Trebouxiophyceae, Chlorophyta)". Molecular Phylogenetics and Evolution. 54 (1): 36–46. 
doi:10.1016/j.ympev.2009.09.035. PMID 19853051. - Ramel, Gordon. "What is a Lichen?". Earthlife Web,. Retrieved 20 January 2015. - Ahmadjian V. (1993). The Lichen Symbiosis. New York: John Wiley & Sons. ISBN 978-0-471-57885-7. - Honegger, R. (1988). "Mycobionts". In Nash III, T.H. Lichen Biology. Cambridge: Cambridge University Press (published 1996). ISBN 978-0-521-45368-4. - Ferry, B. W., Baddeley, M. S. & Hawksworth, D. L. (editors) (1973) Air Pollution and Lichens. Athlone Press, London. - Rose C. I., Hawksworth D. L. (1981). "Lichen recolonization in London's cleaner air". Nature. 289 (5795): 289–292. Bibcode:1981Natur.289..289R. doi:10.1038/289289a0. - Hawksworth, D.L. and Rose, F. (1976) Lichens as pollution monitors. Edward Arnold, Institute of Biology Series, No. 66. ISBN 0713125551 - "Oak Moss Absolute Oil, Evernia prunastri, Perfume Fixative". - Skogland, Terje (1984). "Wild reindeer foraging-niche organization". Ecography. 7 (4): 345. doi:10.1111/j.1600-0587.1984.tb01138.x. - Lawrey, James D.; Diederich, Paul (2003). "Lichenicolous Fungi: Interactions, Evolution, and Biodiversity" (PDF). The Bryologist. 106: 80. doi:10.1639/0007-2745(2003)106[0080:LFIEAB]2.0.CO;2. - Hagiwara K, Wright PR, et al. (March 2015). "Comparative analysis of the antioxidant properties of Icelandic and Hawaiian lichens". Environmental Microbiology. 18 (8): 2319–2325. doi:10.1111/1462-2920.12850. PMID 25808912. - Odabasoglu F, Aslan A, Cakir A, et al. (March 2005). "Antioxidant activity, reducing power and total phenolic content of some lichen species". Fitoterapia. 76 (2): 216–219. doi:10.1016/j.fitote.2004.05.012. PMID 15752633. - Hauck, Markus; Jürgens, Sascha-René; Leuschner, Christoph (2010). "Norstictic acid: Correlations between its physico-chemical characteristics and ecological preferences of lichens producing this depsidone". Environmental and Experimental Botany. 68 (3): 309. doi:10.1016/j.envexpbot.2010.01.003. - "The Earth Life Web, Growth and Development in Lichens". earthlife.net. - "Lichens". National Park Service, US Department of the Interior, Government of the United States. 22 May 2016. Retrieved 4 April 2018. - Nash III, Thomas H. (2008). "Introduction". In Nash III, T.H. Lichen Biology (2nd ed.). Cambridge: Cambridge University Press. pp. 1–8. doi:10.1017/CBO9780511790478.002. ISBN 978-0-521-69216-8. - Baldwin, Emily (26 April 2012). "Lichen survives harsh Mars environment". Skymania News. Retrieved 27 April 2012. - "ESA — Human Spaceflight and Exploration – Lichen survives in space". Retrieved 2010-02-16. - Sancho, L. G.; De La Torre, R.; Horneck, G.; Ascaso, C.; De Los Rios, A.; Pintado, A.; Wierzchos, J.; Schuster, M. (2007). "Lichens survive in space: results from the 2005 LICHENS experiment". Astrobiology. 7 (3): 443–454. Bibcode:2007AsBio...7..443S. doi:10.1089/ast.2006.0046. PMID 17630840. - Eichorn, Susan E., Evert, Ray F., and Raven, Peter H. (2005). Biology of Plants. New York: W. H. Freeman and Company. p. 1. ISBN 0716710072. - Cook, Rebecca; McFarland, Kenneth (1995). General Botany 111 Laboratory Manual. Knoxville, TN: University of Tennessee. p. 104. - A. N. Rai; B. Bergman; Ulla Rasmussen (31 July 2002). Cyanobacteria in Symbiosis. Springer. p. 59. ISBN 978-1-4020-0777-4. Retrieved 2 June 2013. - Ramel, Gordon. "Lichen Reproductive Structures". Retrieved 22 August 2014. - Murtagh GJ, Dyer PS, Crittenden PD (April 2000). "Sex and the single lichen". Nature. 404 (6778): 564. Bibcode:2000Natur.404..564M. doi:10.1038/35007142. PMID 10766229. 
- Kirk PM, Cannon PF, Minter DW, Stalpers JA (2008). Dictionary of the Fungi (10th ed.). Wallingford: CABI. pp. 378–381. ISBN 978-0-85199-826-8. - "Form and structure – Sticta and Dendriscocaulon". Australian National Botanic Gardens. - Lutzoni, F.; Kauff, F.; Cox, C. J.; McLaughlin, D.; Celio, G.; Dentinger, B.; Padamsee, M.; Hibbett, D.; et al. (2004). "Assembling the fungal tree of life: progress, classification, and evolution of subcellular traits". American Journal of Botany. 91 (10): 1446–1480. doi:10.3732/ajb.91.10.1446. PMID 21652303. - The intertidal marine lichen formed by the pyrenomycete fungus Verrucaria tavaresiae (Ascomycotina) and the brown alga Petroderma maculiforme (Phaeophyceae): thallus organization and symbiont interaction – NCBI - Mutualisms between fungi and algae – New Brunswick Museum - Challenging the lichen concept: Turgidosculum ulvae – Cambridge - Congruence of chloroplast – BMC Evolutionary Biology – BioMed Central - Lutzoni, Francois; Pagel, Mark; Reeb, Valerie (June 21, 2001). "Major fungal lineages are derived from lichen symbiotic ancestors". Nature. 411 (6840): 937–940. doi:10.1038/35082053. PMID 11418855. - Hawksworth, D.L. (1988). "The variety of fungal-algal symbioses, their evolutionary significance, and the nature of lichens". Botanical Journal of the Linnean Society. 96: 3–20. doi:10.1111/j.1095-8339.1988.tb00623.x. - Rikkinen J. (1995). "What's behind the pretty colors? A study on the photobiology of lichens". Bryobrothera. 4 (3): 1–226. doi:10.2307/3244316. JSTOR 3244316. - Friedl, T.; Büdel, B. (1996). "Photobionts". In Nash III, T.H. Lichen Biology. Cambridge: Cambridge University Press. pp. 9–26. doi:10.1017/CBO9780511790478.003. ISBN 978-0-521-45368-4. - "Alan Silverside's Lichen Glossary (a-f), Alan Silverside". Retrieved 10 October 2014. - Modern Topics in the Phototrophic Prokaryotes: Environmental and Applied Aspects - Rikkinen, J. (2002). "Lichen Guilds Share Related Cyanobacterial Symbionts". Science. 297 (5580): 357. doi:10.1126/science.1072961. PMID 12130774. Retrieved 10 October 2014. - O'Brien, H.; Miadlikowska, J.; Lutzoni, F. (2005). "Assessing host specialization in symbiotic cyanobacteria associated with four closely related species of the lichen fungus Peltigera". European Journal of Phycology. 40 (4): 363–378. doi:10.1080/09670260500342647. - Guzow-Krzeminska, B (2006). "Photobiont ?exibility in thelichen Protoparmeliopsis muralis as revealed by ITS rDNA analyses". Lichenologist. 38 (5): 469–476. doi:10.1017/s0024282906005068. - Ohmura, Y.; Kawachi, M.; Kasai, F.; Watanabe, M. (2006). "Genetic combinations of symbionts in a vegetatively reproducing lichen, Parmotrema tinctorum, based on ITS rDNA sequences" (2006)". Bryologist. 109: 43–59. doi:10.1639/0007-2745(2006)109[0043:gcosia]2.0.co;2. - Piercey-Normore (2006). "The lichen-forming asco-mycete Evernia mesomorpha associates with multiplegenotypes of Trebouxia jamesii". New Phytologist. 169 (2): 331–344. doi:10.1111/j.1469-8137.2005.01576.x. PMID 16411936. - Lutzoni, François; Pagel, Mark; Reeb, Valérie (2001). "Major fungal lineages are derived from lichen symbiotic ancestors". Nature. 411 (6840): 937–940. doi:10.1038/35082053. PMID 11418855. - "Lichens: Fossil Record", University of California Museum of Paleontology. - Speer BR, Waggoner B. "Fossil Record of Lichens". University of California Museum of Paleontology. Retrieved 2010-02-16. - Poinar Jr., GO. (1992). Life in Amber. Stanford University Press. - Peterson EB. (2000). "An overlooked fossil lichen (Lobariaceae)". 
Originally written in February 1998. At least one person seems to have found it useful. It’s a little incomplete, because it doesn’t deal with malloc and free, nor with C++ references. If you find this useful, feel free to copy and pass along, with attribution. Thanks!

All data and code are stored in memory. The location in memory where they are stored is known as the address of that data or code. Usually they are accessed through variable names that represent them, such as counter, printf, etc. We can, however, also access data using its address, rather than a formal name. This is done using pointers, special variables which store the address of data. Following are several annotated examples of simple pointers at work.

int* x;    // Declare x, a pointer to an integer.
int y;     // Declare y, an integer.
float* r;  // Declare r, a pointer to a float.
float s;   // Declare s, a float.
x = &y;    // x gets y's address -- it points to y.
r = &s;    // r gets s's address -- it points to s.

The next few are a tad trickier. We use the “*” to dereference the pointer. Basically, this means to access whatever it is the pointer is pointing to. You can think of it as counteracting the “*” used in the declaration of the pointer; they neutralize each other, making the result a regular variable.

*x = 15;     // Set value pointed to by x -- y -- to 15.
cout << *r;  // Print value pointed to by r: s.

If it were that simple, of course, then nobody would have trouble with pointers. The fact of the matter is, however, that there are a number of complications — extensions to the idea of pointer — that can be hard to keep track of.

Pointers as Lists

The first is the idea of pointers being equivalent to lists. This is a crucial idea in C and C++. Essentially, instead of thinking of a pointer as pointing to a single variable, you can think of it as pointing to the first variable in a list of variables. Likewise, a list can be accessed without any subscripts to find the pointer to the first element in the list. It works like this (here x is a pointer to int, as before, and y is assumed to have been declared as an array of integers, e.g. int y[8];):

x = new int[8];  // Allocate array of 8 integers.
*y = 8;          // Set first element of y-list to 8.
x[3] = 7;        // Set fourth element of x-list to 7.

Note how pointer notation can be used for the list y, and how list notation can be used for the pointer x.

This brings up a similar topic: pointer arithmetic. Since a pointer is a memory address, you might think that adding 1 to a pointer would simply make it point to the next byte of memory. The C compiler, however, is smarter than that; it realizes that if you're adding something to a pointer, you probably want to make it point to the next element of whatever you're pointing at. So instead of adding whatever you specified to the pointer, it adds that times the size of the object the pointer points to. For example:

int* x = new int[8];  // Allocate an array of integers (the size here is assumed).
x++;                  // Add four to x pointer, to point at next integer.
*x = 5;               // This was originally the second element in the array.
x--;                  // Subtract 4 again from pointer.
*(x + 2) = 6;         // Set third element (second after the first) to 6.
cout << x[1];         // Will now print "5".
cout << x[2];         // Prints "6".

Pointers to Pointers (aka “The Middleman”)

Another pointer curiosity that C throws our way is pointers that point to other pointers. This may seem like a needless feature, but it comes in very handy when you have multidimensional data whose size you don’t know beforehand. You can then use these pointers to pointers to set up an arbitrary-sized multidimensional array.
It works like this: you can think of a pointer to a pointer as being essentially a list of lists. It’s kind of like the words in the dictionary: the first pointer tells you where to find each of the lists for the letters of the alphabet. Each letter is then itself a pointer that forms a list (by pointing to the first element) of all of the words beginning with that letter. If you add another dimension (and make a pointer to a pointer to a pointer), you can have each word also be a list, pointing to the first out of several different meanings for the word.

Here’s how it works in C++. The following program reads in a table of numbers and finds the sum of each row and column. The first two numbers it reads in tell how many rows and columns there are. The rest of the numbers are the ones in the table.

int** table;                // *Two*-dimensional pointer.
int rows, cols, i, j, sum;  // Dimensions of table.

cin >> rows >> cols;        // Find out the number of rows & cols.

// What we'd really like to do here is say:
//     table = new int[rows][cols];
// Unfortunately, this doesn't work in C++, since it instead tries to set up
// a one-dimensional array like this:
//     table = new int[rows * cols];
// And a one-dimensional array (a pointer to integers) is absolutely
// incompatible with a two-dimensional array (a pointer to a pointer to
// integers), so our program will crash. Note that the syntax above will
// work, however, in Java.

table = new int*[rows];          // Allocate the rows.
for(i = 0; i < rows; i++)
    table[i] = new int[cols];    // Allocate each row's columns.

for(i = 0; i < rows; i++)        // Read in the table values.
    for(j = 0; j < cols; j++)
        cin >> table[i][j];

for(i = 0; i < rows; i++) {      // Find row sums.
    sum = 0;
    for(j = 0; j < cols; j++)
        sum += table[i][j];
    cout << "Row sum for row " << i << ": " << sum << endl;
}

for(j = 0; j < cols; j++) {      // Find column sums.
    sum = 0;
    for(i = 0; i < rows; i++)
        sum += table[i][j];
    cout << "Col sum for col " << j << ": " << sum << endl;
}

Notice how we had to explicitly allocate both dimensions of the array, starting with the first dimension, the rows; and then for each row we allocated its columns. You may be wondering why the outer dimension is allocated using “new int*[n]”, while the inner dimension is allocated using “new int[n]”. This is because each dimension but the last is a pointer to the next dimension. The final array dimension is obviously simply a list of integers, so in the inner loop we merely allocate a list of integers. The next dimension up, however, is not a list of integers; it’s a list of lists of integers. As such, each entry in this list will itself be a pointer to a list of integers — the final dimension. Therefore, the code allocating the outside dimension must allocate a list of pointers to integers. If we had additional dimensions, for each column in each row we would then have to allocate the list of cells, and so forth.

Here’s how a three-dimensional array allocation might look. Assume that x, y, and z are the size of each array dimension. Notice how the outer dimension is now a list of int**'s — a list of lists of lists — and the second dimension is now the one that is int* — a list of lists — while the final dimension is still a list of integers.

int*** array3d;   // Three-dimensional pointer.
int i, j;

array3d = new int**[x];              // Allocate the outermost dimension.
for(i = 0; i < x; i++) {
    array3d[i] = new int*[y];        // Allocate the middle dimension.
    for(j = 0; j < y; j++)
        array3d[i][j] = new int[z];  // Allocate the innermost dimension.
}
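For readers who want to compile and run the table example, here is a self-contained sketch that puts the pieces above together. It is our own illustration: the input-reading loop follows the text, while the cleanup with delete[] at the end goes beyond what the original covers.

#include <iostream>
using namespace std;

int main() {
    int** table;                // Two-dimensional pointer.
    int rows, cols, i, j, sum;

    cin >> rows >> cols;        // Read the table dimensions.

    table = new int*[rows];     // Allocate the rows.
    for(i = 0; i < rows; i++)
        table[i] = new int[cols];   // Allocate each row's columns.

    for(i = 0; i < rows; i++)       // Read the table values.
        for(j = 0; j < cols; j++)
            cin >> table[i][j];

    for(i = 0; i < rows; i++) {     // Print row sums.
        sum = 0;
        for(j = 0; j < cols; j++)
            sum += table[i][j];
        cout << "Row sum for row " << i << ": " << sum << endl;
    }

    for(j = 0; j < cols; j++) {     // Print column sums.
        sum = 0;
        for(i = 0; i < rows; i++)
            sum += table[i][j];
        cout << "Col sum for col " << j << ": " << sum << endl;
    }

    for(i = 0; i < rows; i++)       // Free each row, then the list of rows.
        delete[] table[i];
    delete[] table;

    return 0;
}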
Polynomials are a form of algebraic expression consisting of variables, coefficients, and constants. This chapter deals with a number of sums focused on simplifying polynomial expressions with different exponents. The Polynomials Class 9 worksheet with answers PDF will help you evaluate your understanding of the concepts of this chapter. The Class 9 polynomial worksheet is one of the most useful study resources that aims to teach students the application of various theories of polynomials. Students can self-assess their understanding of the basic concepts by referring to the Worksheet for Class 9 CBSE Maths Polynomials.

According to the Polynomials Class 9 worksheet PDF, a polynomial is an expression built from one or more terms with non-zero coefficients; it may carry more than one term. In the polynomials worksheet class 9, each expression that is added within a polynomial is defined as a term. Let’s suppose that x² + 5x + 2 is a polynomial. In this example, x², 5x, and 2 are the terms of the polynomial. Remember, every single term in a polynomial has a coefficient. Further, real numbers on their own can also appear as terms in the grade 9 math polynomial worksheets. Certain numbers are polynomials without any variables; these are known as constant polynomials. The constant polynomial 0 is known as the zero polynomial.

The degree of a polynomial is the highest power of the variable that appears in it. Consider the example x³ + y³ + 3xy(x + y); the degree of this polynomial is 3. A non-zero constant polynomial has degree zero. Apart from these, polynomials can be further categorised into three types:

Linear Polynomial – of degree one.
Quadratic Polynomial – of degree two.
Cubic Polynomial – of degree three.

Q1. Find the degree of each polynomial listed below.
(i) 5x³ + 4x² + 7x (ii) 4 – y² (iii) 5t – √7 (iv) 3

(i) The given polynomial is 5x³ + 4x² + 7x. The highest power of the variable x is 3. So, the degree of the polynomial is 3.
(ii) The given polynomial is 4 – y². The highest power of the variable y is 2. So, the degree of the polynomial is 2.
(iii) In the given polynomial 5t – √7, the highest power of the variable t is 1. So, the degree of the polynomial is 1.
(iv) Since 3 = 3x⁰ [∵ x⁰ = 1], the degree of this polynomial is 0.

Q2. Verify whether 2 and 0 are zeroes of the polynomial x² – 2x.
Let p(x) = x² – 2x.
Then p(2) = 2² – 2(2) = 4 – 4 = 0 and p(0) = 0 – 0 = 0.
Hence 2 and 0 are both zeroes of the polynomial x² – 2x.

Listed below are some observations about the zeroes of polynomials:
(i) A zero of a polynomial need not be 0.
(ii) 0 may be a zero of a polynomial.
(iii) A polynomial can have more than one zero.

Q1. What are the Key Takeaways From the Polynomials Class 9 Worksheet?
Answer: Listed below are some of the key takeaways from the polynomials worksheet Class 9.
The terms of a polynomial are either a single number, a variable, or a combination of numbers and variables.
The degree of a polynomial is the highest power of the variable in the polynomial.
A monomial is a polynomial with one term.
A binomial is a polynomial with two terms.

Q2. In the question below, find the coefficient of x in each expression, and then find the degree of each polynomial in the second part:
i. 3x + 1
ii. 23x² – 5x + 1

Answer: For the first part, we need the coefficient of x in:
i. 3x + 1
ii. 23x² – 5x + 1
Here, in 3x + 1, the coefficient of x is 3. Further, in 23x² – 5x + 1, the coefficient of x is –5.

Moving to the second part of the question, which asks for the degrees of the following polynomials:
3a² + a – 1
23x³ + x – 1
The degree of 3a² + a – 1 is 2.
The degree of 23x³ + x – 1 is 3.
[27-JUN-20] This text serves as an introduction to infinitesimal calculus for science and engineering students. We take time to introduce the fundamental concepts of infinitesimal calculus, and illustrate these with numerical calculations and geometry. We proceed through differential equations and integrals with brief mathematical discussions followed by detailed examples of their application. Our intention is to teach calculus by showing calculus at work. We trust that the interested student can fill in their own table of derivatives and integrals as they proceed in their scientific endeavors. What we want to do is demonstrate the power of calculus to make sense of the physical world, and so convince the student that a working understanding of infinitesimal calculus will advance their careers in science.

Let x be a real number. We say x ∈ ℝ. Consider the expression x + x². How does its value vary as x gets smaller?

x          x + x²       Difference
1          2            1
0.1        0.11         0.01
0.01       0.0101       0.0001
0.001      0.001001     0.000001

As x gets smaller, the expression x + x² gets closer to x. The x² part becomes insignificant. At some point, x + x² becomes indistinguishable from x. We may not agree about when we can first ignore the x² part, but as we keep making x smaller, we will eventually all be forced to agree that there is no significant difference between x and x + x². The fact that x² becomes insignificant compared to x for very small values of x is a fundamental principle of infinitesimal calculus. We say x is infinitesimal when we allow its value to approach zero, but never actually reach zero, and we write x→0. To express the behavior of x + x² as x→0 we say, "The limit of x + x² as x→0 is x."

A function of x is a mathematical expression whose value depends on x. The expression x + x² is a function of x. We use the notation f (x) to indicate a function of x. The function need not provide a value for every x, but it must provide only one value for a given x. The function √x is the positive square root of x, so that √9 = 3, not −3. The same √x does not return a real value when x = −1. Nor does the function 1/x return a numerical value when x = 0. When the value of a function is always a real number, we say it is real-valued, or f (x) ∈ ℝ.

When we determine the limit of f (x) as x→0, we are seeking to produce an answer that contains one and only one term in x. If we consider the function 1 + x², its limit as x→0 is also 1 + x². It is true that the function becomes arbitrarily close to 1 as x→0, but the point is not to determine the value of the function as x→0, but rather to summarise how the value of the function changes as x→0. In this case, the change in the function as x→0 is represented entirely by the term x². But if we have the function 1 + x + x², its limit as x→0 is 1 + x. The variation represented by x² is negligible compared to the variation represented by x.

Example: What is the limit of sin(x) as x→0? Here, we assume that x is in radians, so that sin(π/2) = 1. We set our calculator to work with angles in radians, and get sin(0.01) = 0.0099998, which is only 0.002% different from 0.01, so the limit of sin(x) as x→0 is x.

Example: What is the limit of e^x as x→0? We enter exp(0.01) on our calculator and get 1.01005, which is close to 1.01, so the limit of e^x as x→0 is 1 + x.

Exercise: What is the limit of (x + x⁴) as x→0?
Exercise: What is the limit of (1 + x) as x→0?
Exercise: What is the limit of (1 + 1/x) as x→0?
Exercise: What is the limit of cos(x) as x→0?

Let f (x) be a real-valued function of x ∈ ℝ.
We could have f (x) = x + x², or f (x) = sin x + e^x − 1, or f (x) = √x. We plot each of these functions below. Consider f (x) = x + x² in the plot above. For each value of x, the function f (x) has a value. At x = 0, we have f (x) = 0. When x = 1, we have f (x) = 2. The slope of the line varies also. If we imagine a ball on the line at x = −1, the ball would roll to the right. We say the slope of f (x) at x = −1 is negative, or downwards. At x = 1, the same ball would roll to the left. We say the slope of f (x) at x = 1 is positive, or upwards.

So long as the graph of f (x) versus x is smooth (has no infinitely sharp corners), we can approximate small sections of the graph with short straight lines. The smaller the sections, the better the approximation. In the figure below, we see a close-up of f (x) = x + x². We let y = f (x) for brevity. In the close-up, x increases from x to x + δx, and y increases from y to y + δy. Here we use δ to mean "a small change in". When we increase x by δx, y increases by δy. The slope of the line in the close-up is defined as δy/δx. We have y = x + x², so we can find δy in terms of x, and so determine the slope as a function of x and δx: δy = (x + δx) + (x + δx)² − (x + x²) = δx + 2x·δx + (δx)², so the slope is δy/δx = 1 + 2x + δx. Now let δx→0. Now we can obtain the slope in terms of x alone, because the term in δx becomes vanishingly small compared to 1 + 2x. As δx→0, the slope depends only on x, not δx. It turns out that this is true for any smooth function of x. The limit of the slope as δx→0 is the derivative of y with respect to x, written dy/dx. It is the slope of the graph of y in an infinitesimal neighborhood of x.

Example: When x = −0.5, our equation predicts that the slope will be zero, and indeed the slope is zero in our plot of x + x² above. We also predict that the slope will be +1 at x = 0 and +3 at x = 1. An examination of the same plot shows that these predictions are also correct.

We can also write df (x)/dx for the derivative of f (x) with respect to x, and we can write d(x + x²)/dx to mean the derivative of x + x² with respect to x.

Example: What is the derivative of x² with respect to x? Let x increase to x + δx. We have x² increasing to (x + δx)² = x² + 2xδx + (δx)². Subtract x² to get δ(x²), which is 2xδx + (δx)². Divide by δx to get slope δ(x²)/δx = 2x + δx. The limit of the slope as δx→0 is d(x²)/dx = 2x.

Exercise: What is the derivative of y = x³ with respect to x?
Exercise: What is the derivative of y = 6x with respect to x?
Exercise: What is the derivative of y = 5 with respect to x?
Exercise: What is the derivative of y = x³ + 6x + 5 with respect to x?
Exercise: What is the derivative of y = e^x with respect to x? Use e^δx = 1 + δx as δx→0 and recall that e^(a + b) = e^a e^b.

To differentiate a function with respect to x is to calculate its derivative with respect to x. When we differentiate f (x) = x + x² with respect to x, we obtain df (x)/dx = 1 + 2x. The derivative is another function of x. We denote this derivative function f '(x), where the single mark indicates "differentiated once with respect to x". Because f '(x) is itself a function of x, we can differentiate it and obtain f ''(x) = d(1 + 2x)/dx = 2. The function f '(x) is the first derivative of f (x) and the function f ''(x) is the second derivative. We can also write f ''(x) as d²f (x)/dx². The third derivative of f (x) is d³f (x)/dx³ = f '''(x). In our example, f '''(x) = 0. The higher derivatives after that are all zero also.

Rule: When f (x) = x^n for any real number n, we have f '(x) = n·x^(n−1).

Example: Let f (x) = x⁵.
Then f '(x) = 5x⁴, f ''(x) = 20x³, f '''(x) = 60x², f ''''(x) = 120x, f '''''(x) = 120, and f ''''''(x) = 0.

Consider the function f (x) = x³ and its derivatives, shown below. As x increases from −3, the slope of x³ decreases until it reaches zero at x = 0. After that, the slope increases again. We say the graph of x³ undergoes an inflection at x = 0. The slope of x³ is given exactly by the plot of 3x². Looking at 3x², this graph also has a slope, and its slope is always increasing. For x < 0 the slope is negative, at x = 0 the slope is 0, for x > 0 the slope is positive. We say the graph of 3x² is at a minimum at x = 0. The slope of 3x² is given exactly by the plot of 6x. Looking at 6x, we see it has its own constant slope, and this slope is given by the last line in the plot: the constant value 6.

Exercise: What is the derivative of f (x) = 2x³ + 4x²?
Exercise: What is the derivative of f (x) = e^(4x)? Use e^δx = 1 + δx as δx→0 and recall that e^(a + b) = e^a e^b.

The derivative of a function is zero when the function reaches a maximum, an inflection, or a minimum. The second derivative is negative at a maximum, zero at an inflection, and positive at a minimum.

Exercise: At what value of x does the function f (x) = x² − 2x have a minimum?
Exercise: Find any minima, maxima, or inflections of the function f (x) = x³ − 2x².

The rules of differentiation are mathematical relationships that help us obtain the derivatives of complicated functions. We can derive the rules ourselves with the addition of δx to x, but the proofs take long enough that the rules are worth remembering.

Power Rule: When f (x) = g(x)^n for any real number n and any other function g(x), we have f '(x) = n·g'(x)·g(x)^(n−1).

Example: Let f (x) = (3 − x²)². Then f '(x) = −4x(3 − x²).

Example: Let f (x) = √x. Well, √x = x^½ so f '(x) = ½x^(½−1) = ½x^(−½) = 1/(2√x).

Exercise: If f (x) = 1/x, what is f '(x)?

Product Rule: When f (x) = g(x)h(x), we have f '(x) = g'(x)h(x) + g(x)h'(x). The product rule is easy to prove, so we will do so here. We can use the product rule to prove the power rule, but we leave that as an exercise for the reader.

Derivation of the Product Rule: Let us derive the product rule. Suppose f (x) = g(x)h(x), the product of two other functions of x. For a small increase δx in x we have f (x + δx) = g(x + δx)h(x + δx). But g(x + δx) is just g(x) + δx·g'(x): the change in g(x) is the slope of g(x) multiplied by δx. So we have f (x + δx) = [g(x) + δx·g'(x)][h(x) + δx·h'(x)] = g(x)h(x) + δx·g'(x)·h(x) + δx·h'(x)·g(x) + (δx)²·g'(x)h'(x). But as δx→0 we can ignore the term in (δx)², so the increase in f (x) is just δx·g'(x)h(x) + δx·h'(x)g(x). When we divide by δx to get the slope we are left with f '(x) = g'(x)h(x) + g(x)h'(x), which is the product rule as stated above.

Exercise: What is the derivative of e^(3x)·x² with respect to x?
Exercise: What is the derivative of x cos(x) with respect to x?

In the diagram below, we have a function f (x) plotted with respect to x. The function f (x) draws out a smooth curve. Although the slope of this curve may be steeply down or up, we assume it is never vertical. The shaded area A is the area under the curve from x = 0 to some positive value of x. Suppose we increase x by a small amount δx. Then A increases by the cross-hatched area δA. This cross-hatched area consists of a rectangle of area f (x)δx, and a triangle of area ½( f (x + δx) − f (x) )δx = ½δf (x)δx. But as δx→0, the triangle becomes insignificant. If this is not obvious to you, consider the following argument. As δx→0, we have δf (x)→f '(x)δx.
So the triangular part is ½f '(x)(δx)², while the rectangular part is f (x)δx. We have assumed that the slope of f (x) is never vertical, so f '(x) is finite. As δx→0, the triangular term becomes insignificant compared to the rectangular term. As δx→0, therefore, we have δA = f (x)δx and consequently δA/δx = f (x). But δA/δx as δx→0 is the derivative of A with respect to x, or dA/dx. Thus dA/dx = f (x).

Example: The area under the curve f (x) = 5 has constant derivative 5. The area under f (x) = 5 from x = 0 to x = 10 is 50.

Exercise: What is the area under the curve f (x) = x from x = 0 to x = 10?

When a function of x and all its derivatives are continuous with respect to x, we call it a smooth function of x. Let y = f (x) be a smooth function of x. In the diagram below, we show how we can use the slope of the function to determine its change in value as x increases from a to b. We increase x in small steps, δx, from a to b. For each step we obtain an estimate, δy, of the change in y by multiplying the slope, dy/dx, on the left edge of the step by the width of the step, δx. Our δy is too small by a distance ε because, in our example, dy/dx is increasing with x. But dy/dx is continuous, so as δx→0, the change in the slope across the step becomes negligible, and ε/δy→0. For small enough steps, we can ignore ε. The total change in y from x = a to b is marked as Δy in the diagram. As δx→0, Δy becomes equal to the sum of all the δy = (dy/dx)δx for all the infinitesimal steps δx. We say that Δy is the integral of dy/dx with respect to x from a to b.

The value of f (x) is the derivative of the area under the graph of f (x). If this area A can be described by some function g(x), then g'(x) = f (x). We say that g(x) is the integral of f (x). The act of determining the integral we call integration. Integration is the opposite of differentiation. When f '(x) is the derivative of f (x), then f (x) is the integral of f '(x).

Example: Velocity is the derivative of position with respect to time. If x is the position of an object along a straight line, and t is time, then velocity v = dx/dt. Conversely, the displacement of our object (its change in position) is the integral of its velocity. If we plot v versus t, the displacement between two moments in time is the area under the curve between these two moments in time. Thus we obtain displacement by integrating velocity.

Example: Suppose our function is f (x) = 2x. What is the area under f (x) from x = 0 to x = 10? We figured out in an earlier example that d(x²)/dx = 2x. So the integral of 2x is x². The area under the curve 2x is x². From 0 to 10 the area is 10² = 100.

If f (x) is negative, then g'(x) is negative, which means the area under the curve is decreasing. In the regions where f (x) is negative, the area under f (x) is negative also.

Example: The graph of sin(x) over the interval x = 0 to x = 2π has two halves of equal size and opposite sign. The area under the curve from 0 to π is positive. The area under the curve from π to 2π is negative. The positive and negative areas have equal magnitudes. They add to zero. The total area under sin(x) from 0 to 2π is zero.

If we plot f (x) ourselves, we could measure the integral by dividing the area under the curve into many thin, vertical strips, just like the δA strip in our earlier diagram. Each of these strips would have area f (x)δx. This procedure for finding the integral is called numerical integration. As δx→0, our measurement becomes impractical for a human being, but not for a computer.
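To make the procedure concrete, here is a short sketch in C++ (our own illustration, not part of the original text) that estimates the area under f (x) = 2x from x = 0 to x = 10 by summing thin rectangular strips of width δx. With δx = 0.001 it prints a value very close to the exact area of 100 found above.

#include <iostream>

// Estimate the area under f from a to b by summing strips of width dx.
double integrate(double (*f)(double), double a, double b, double dx) {
    double area = 0.0;
    for (double x = a; x < b; x += dx)
        area += f(x) * dx;      // Each strip has height f(x) and width dx.
    return area;
}

double f(double x) { return 2.0 * x; }   // The function from the example above.

int main() {
    std::cout << integrate(f, 0.0, 10.0, 0.001) << std::endl;   // Prints approximately 100.
    return 0;
}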
When all other methods of finding an integral have failed, numerical integration is the method we turn to. The notation below expresses the concept of summing an infinite number of vertical slices each of area f (x) dx, where dx is an infinitesimal change in x. The integral symbol, ∫, is a stretched letter S, for "sum". The sum is from one value of x to another. In the example above, we are summing from x = a to x = b. We call a and b the lower and upper limits of the integration. The term f (x) dx denotes the infinitesimal slices that are to be summed together to obtain the total area. Each slice has finite height f (x) and infinitesimal width dx.

When we say, "the integral of f (x)," we mean, "the integral from zero to x." The lower limit is zero and the upper limit is some unspecified value of x. The integral of 2x is x². The value of an integral with limits a and b is the value of the integral from 0 to b minus the value of the integral from 0 to a.

Example: What is the area under 2x from x = 4 to x = 5? The area from zero to four is 16, and the area from zero to five is 25, so the area between four and five is 25 − 16 = 9.

As we saw earlier, we have a procedure for determining the derivative of any function. We calculate the change in the function for a small change dx, then divide by dx to obtain the derivative. No such procedure exists for determining the integral of a function. If we know the limits of our integral, we can use numerical integration to obtain a numerical value for our answer. An integral with known limits is called a definite integral. An indefinite integral is one where we do not specify the limits, but instead obtain a formula that gives the integral for all values of x. We cannot obtain the indefinite integral of a function by numerical integration. To obtain the indefinite integral we must either guess the integral and then check our guess, or examine a table of integrals and adapt a similar integral to our problem. We make a table of integrals by differentiating a bunch of functions and tabulating our results. When we see our function in the derivative column, our integral is the function we differentiated.

The derivative of u(x)v(x) is the product rule of differentiation, which we derived earlier. The integral of u(x)v'(x) is the rule of integration by parts, which you can derive from the product rule by noting that the integral of v'(x) is v(x). The function exp(−x²) appears in the normal distribution. Its integral is the error function, denoted erf(x), which we obtain by numerical integration. The error function is an indefinite integral: we have no formula for it in terms of elementary functions. We can look it up in a table of values, or we can calculate it with a computer. You will find our own error function routine in this library.

Example: What is the derivative of sin²x? We don't have this one in the table. But we do have the derivative of sin ax. We can set a = 1. And we have the derivative of f (x)^n. We can set n = 2. So let f (x) = sin x. Then d(sin²x)/dx = 2 cos x sin x. We could also use the product rule, by letting u(x)v(x) = sin x · sin x, and then we have d(sin²x)/dx = cos x sin x + sin x cos x = 2 cos x sin x.

Example: What is the integral of sin²x? We use the trigonometric identity sin²x = ½ − ½cos 2x. One row of the table tells us the integral of cos 2x is ½sin 2x. Another row tells us that the integral of 1 is x. The integral of sin²x is therefore x/2 − (sin 2x)/4.

In the following example, we obtain the formula for the area of a circle by integrating the formula for the circumference of a circle.
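The original presents this derivation as a figure; in standard notation, the same computation (a sketch) reads:

\[
A \;=\; \int_0^r 2\pi x \, dx \;=\; \Big[\, \pi x^2 \,\Big]_0^r \;=\; \pi r^2 .
\]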
We divide the circle's area, A, into infinitesimal annuli of radius x, and thickness dx. We obtain a formula for the area of each of these infinitesimal annuli. We integrate this formula from radius zero to the full radius of the circle so as to obtain the total area of the circle. The notation we use at the end of the derivation, with square brackets and the limits of integration on the top-right and bottom-right, is standard notation for a definite integral. We subtract the value of the indefinite integral at the bottom limit from its value at the top limit, and so obtain the definite integral.

Exercise: Consider a circular cone pointed at one end, radius r at the other end, and length l. Obtain a formula for the volume of the cone by integration of the above-derived formula for the area of a circle.

Coming soon, we demonstrate the evolution of the Boltzmann distribution by numerical simulation, then we take the infinitesimal limit of a histogram to obtain a probability distribution, and so derive the Boltzmann factor. The diagram below is a page giving an outline of the integration.

The average value of sin x is zero for the interval 0 to 2π radians. The average of sin²x is not zero, because the squares of both negative and positive values are positive. The root mean square of sin x is the square root of the average value of sin²x from 0 to 2π. The root mean square of a function is a measure of how much the function deviates from zero, even when its average value is zero. One way to calculate the mean square of sin x is to pick a large number of evenly-spaced values of x, calculate the value of sin²x at each of these values of x, and take the average. That's what we do in the SIN2X sheet of Calculus.ods (Open Office spreadsheet), which we invite you to download and examine. The figure above shows the top rows of the spreadsheet, showing x in units of radians and in multiples of π, sin x, and sin²x at one hundred points from 0 to 2π. There is also a plot of sin x and sin²x, which shows how sin²x is always positive, which we reproduce below.

Our values of x start at 0.00π and proceed in steps of 0.02π until we get to 1.98π. We do not continue to 2.00π, because that would be 101 points. Furthermore, we would be repeating our consideration of x = 0.00π, because the value 2.00π begins a new cycle of sin x, where 2.00π is equivalent to 0.00π. Adding these 100 values together, we get a sum of 50.00. We divide by 100 and find that the mean square of sin x is 0.5. The root mean square is 0.707 = 1/√2.

Now consider our spreadsheet calculation this way: we take 100 values of x, and at each value of x we imagine a thin vertical slice of sin²x, of height sin²x and width 0.02π. Its area is 0.02π·sin²x. We add the areas of all these slices together to get an estimate of the total area under the graph of sin²x from 0.00π to 2.00π, because the final slice extends to 2.00π. We say an estimate because the top edges of the individual slices are always horizontal. They do not follow the continuous line of sin²x. The area of slice number n is 0.02π·sin²xₙ and the sum of all their areas is 0.02π times the sum of all 100 values of sin²xₙ. The length of this area is 2.00π. If we divide the area by 2.00π we get the average height of the area, which we see is 0.01 times the sum of the 100 values of sin²xₙ, which is the average value of sin²xₙ.

Rule: The average value of a function f (x) between x = a and x = b is the integral of f (x) from a to b divided by (b−a).
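In standard notation, the rule and the root-mean-square calculation above read (our summary):

\[
\bar{f} \;=\; \frac{1}{b-a}\int_a^b f(x)\,dx ,
\qquad
f_{\mathrm{rms}} \;=\; \sqrt{\frac{1}{b-a}\int_a^b f(x)^2\,dx} .
\]

For f (x) = sin x on the interval 0 to 2π:

\[
\overline{\sin^2 x} \;=\; \frac{1}{2\pi}\int_0^{2\pi} \sin^2 x \,dx \;=\; \frac{1}{2},
\qquad
(\sin x)_{\mathrm{rms}} \;=\; \frac{1}{\sqrt{2}} \;\approx\; 0.707 ,
\]

in agreement with the spreadsheet estimate.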
What we call numerical integration is where we divide the area under a curve into a large number of slices, and we add the slice areas together to obtain an estimate of the total area under the curve, which is an estimate of the integral of the curve. With a large enough number of slices, our estimate can be as accurate as we need it to be. What we did in our spreadsheet was to calculate the area under sin²x by numerical integration, and then use the above rule to obtain the average value of sin²x from our estimate of the integral.

Suppose we dig up a 1-g fragment of charcoal from an archaeological site. We want to figure out how old it is using carbon-dating. The concentration of carbon-14 in charcoal made from new wood is one part per trillion = 1 ppt = 10⁻¹², because that's the ratio of carbon-14 to carbon-12 in the atmosphere. Our 1-g piece of charcoal is almost entirely made of carbon-12, so it contains 1/12 mole of carbon-12. If the charcoal were new, one in 10¹² of these carbon atoms would be carbon-14. Using Avogadro's constant, the number of carbon-14 atoms in the 1-g fragment when it was first created was 6.0×10²³ × 1/12 × 10⁻¹² = 5.0×10¹⁰ = 50 billion carbon-14 atoms. But one in 8000 carbon-14 atoms decays into nitrogen-14 every year, and when they do so, they emit an electron, which we can detect with a suitable instrument. So we can count the rate at which carbon-14 atoms are decaying in our sample. If the sample were new, we would see 6.3 million decays per year, or around 710 per hour. But we measure only 3.7 decays per hour. How old is our sample?

Let us start by writing down what we know about the rate of decay of carbon-14 as a differential equation. Now we integrate our differential equation to obtain an expression for the number of carbon-14 atoms, N, versus time, t, in years. The notation we use in the following hand-written derivation is similar to the notation we used in the derivation of the area of the circle, but it's not immediately obvious what infinitesimal areas are represented by dN/N and −α dt. The first is the area of a slice of width dN under the graph of 1/N plotted against N. The second is the area of a slice of width dt under the graph of −α plotted against t. The differential equation tells us this: if a change in N of dN occurs in an interval of time dt, then the area dN/N must be equal to the area −α dt. We note that dN is going to be negative, because carbon-14 atoms are decaying, while dt is going to be positive, because time is always increasing. Summing these infinitesimal areas from time zero, when N = N₀, to time t gives ln(N/N₀) = −αt, so N = N₀e^(−αt).

We re-arrange this equation to obtain an expression for the age of the sample as a function of our observed rate of decay. The observed rate of decay is dN/dt, which we assign the symbol D. Before we proceed, we check the units of our expression to make sure they are right. Note that 1/α = 8000 yrs. As another check, try D = 6.3 million /yr. We get t = 0 yr. Good. Now, we observe 3.7 decays per hour, or 32,000 decays per year. So our sample is 8000 × ln(5.0×10¹⁰ ÷ 8000 ÷ 32,000) ≈ 42,000 yr old.

A differential equation is an equation that relates a function to its derivatives. In the above example, we have the derivative of N with respect to time being equated to the value of N multiplied by a negative constant. Many real-world physical systems may be well-described by differential equations. Solving them, we obtain the behavior of these systems with time. As we solve them, we make use of known values of physical constants. In the above example, we knew the value of time started at zero, and the value of N at time zero was N₀.
These known values are essential to solving the differential equation, and we call them the boundary conditions of the solution.

When we borrow money from a bank, we pay the bank interest, which is like rent for money. The more money we borrow, the more interest we must pay. As we pay off the loan, the amount of money we still owe the bank is what we call the principal of the loan. The interest rate is the interest we must pay per year for each dollar of principal that remains. If our principal is $100,000 and our interest rate is 3%/yr, we owe $3,000 interest per year. If we pay only $3,000/yr to the bank, we will be paying off only the interest, and the principal will remain $100,000. If we pay $10,000 in the first year, we will pay off $7,000 of the loan, leaving $93,000, and the next year we will owe less interest. Suppose we want to pay off the entire loan in 10 years, making small, frequent payments. What will our total annual payment be?

We need to solve a differential equation to obtain the annual payment. In the following derivation, we imagine that we are paying continuously in infinitesimal amounts for each infinitesimal payment period. Equation (1) relates dP/dt to P. It is a differential equation. The differential equation itself does not tell us the value of P at time t. We assume, however, that there exists some equation that will give us the value of P in terms of t. We now guess what this equation will look like, and then test our guess to see if it satisfies the differential equation. For brevity's sake, our first guess is correct. Any other guess would result in a contradiction when we tested it against Equation (1).

Our value of M is the fixed annual payment rate that repays the entire loan with interest in time T. The time T is the loan term. We divide M into twelve parts to make monthly payments. The following table gives the payment rate for various interest rates, loan terms, and amounts borrowed. We also give the total amount paid over the course of the loan. In the case of a 30-year mortgage at 3%, we end up paying the bank a total of 50% more than the initial loan amount. At 10%, we pay triple the loan amount over thirty years.

Exercise: We throw a ball straight up in the air with velocity 32 m/s. Until it hits the ground again, the ball decelerates at g = 10 m/s/s due to gravity. Let h be its height above the ground. Write down a differential equation relating the second derivative of height to g. Solve the differential equation to obtain h as a function of time, using boundary condition h = 2 m at t = 0. At what time does the ball hit the ground?
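For reference, the loan model described above can be written out as follows (our own sketch of the hand-written derivation, writing r for the interest rate and P₀ for the initial principal — symbols the original does not name explicitly):

\[
\frac{dP}{dt} \;=\; rP - M ,
\qquad
P(t) \;=\; \frac{M}{r} + \left(P_0 - \frac{M}{r}\right)e^{rt} ,
\qquad
P(T) = 0 \;\Rightarrow\; M \;=\; \frac{r P_0}{1 - e^{-rT}} .
\]

For a 30-year loan at 3%/yr this gives M ≈ 0.051·P₀ per year, about 1.5·P₀ paid in total, and at 10%/yr about 3.2·P₀ in total, consistent with the figures quoted above.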
A number of researchers are exploring the use of 3D printing/additive manufacturing in space applications. One experiment is testing the possibility of using 3D printing to make spacecraft components directly in orbit.

The Additive Manufacturing In Space (AIMIS-FYT) team at Munich University of Applied Sciences is developing and researching an additive manufacturing process in which the production of structures takes place in zero gravity. The benefit here is that elements produced this way for space travel do not have to meet the demanding mechanical requirements of launch. The process is being researched on parabolic flights in zero gravity – supported by a uEye CP industrial camera from IDS.

For this additive manufacturing process, the AIMIS-FYT team developed a 3D printer with an extruder that dispenses a liquid photopolymer. “Our 3D printing process can directly print three-dimensional structures in space using a UV-curing adhesive or potting compound,” says Torben Schaefer, press officer of the AIMIS-FYT team. Rather than create components layer by layer, the team created a 3D printer that builds parts directly using the three-dimensional movement of the print head. UV light cures the resin that is freely extruded into space in zero gravity, hardening the material in a short time. In combination with weightlessness, this process enables manufacturing without the shape restrictions that normally exist due to gravity on Earth. Typical shape limitations are, for example, long overhangs that are not possible on Earth or that can only be manufactured with elaborate support structures. In zero gravity, it is even possible to create components without a fixed anchor point, such as a printing plate.

This production process enables a variety of designs, such as printed structures for solar panels or antennas. For example, the production of mirrors for parabolic antennas or the manufacture of truss structures for the mounting of solar generators is possible. Those who develop small and micro-satellites or even entire satellite constellations can make them in orbit, rather than on Earth, reducing unit costs and the cost of launching their systems into orbit. Building satellites in space also enables developers to take more fuel on board, extending the useful life. “For satellites, the fuel is usually the limiting factor; at present, it usually lasts for around 15 years,” explains Torben Schaefer.

One of the first tests was the printing of straight rods, connections of rods and the creation of free-form rods. In one case, a conventional printing plate was used as the starting point for printing; in another case, the behavior of printed, free-floating rods was investigated.

The 3D printing process

The main parameters of the printing process are the extrusion speed of the resin, the UV light intensity, the UV exposure time and the trajectory – or movement path – of the printer. “In our printing process, precise, pressure-stable and constant delivery of the medium is important. At the same time, the parameters should be kept constant during the entire process,” says Torben Schaefer. The USB 3 camera sponsored by IDS keeps a close eye on the process: it watches the nozzle of the printer in close-up and always moves along with it. This way, the camera follows the nozzle with every movement. The image is cropped in such a way that the formation of the rods is captured around 4.5 cm below the nozzle. The IDS camera provides important results for the discharge of the resin and its curing.
The UV LEDs produce strong overexposure, which means that difficult lighting conditions exist. These are no problem for the U3-3260CP from the IDS portfolio, which uses the cost-effective 2.30 MPixel Sony IMX249 sensor (1920 x 1200 px). This global shutter CMOS sensor, with its 5.86 µm pixels, is well suited to applications like this one, which must deliver a reliable result even in difficult lighting conditions – in this case, strong brightness due to overexposure.

To further analyze the exit behavior from the nozzle in zero gravity, the process is carried out at a slower speed. The contour of the rod must be precisely captured. “For this, the high frame rate and resolution of the camera are crucial for a high-quality evaluation,” says Torben Schaefer from the AIMIS team. With a frame rate of 47.0 fps, the IDS camera ensures good image quality with minimal noise – perfect conditions for its task in space.

In addition, the camera was easy to install. “We were able to seamlessly integrate the camera into our C++-based monitoring system with the help of the IDS SDK,” says Torben Schaefer. According to him, this is where all the data from the sensors converge and provide a comprehensive overview of the current status of the printer and the individual print parameters. “We can start and stop the recording of the IDS camera and all other measurements with one click. Since there are only twenty seconds of zero gravity on a parabolic flight and there is a break of around one and a half minutes between two parabolas, we only save the most important information by starting and stopping measurements and recordings in a targeted manner.” In addition, a live image of the printing process is displayed on the monitor with the help of the IDS software. “This live feed makes it easier for us to set up the printhead and to analyze it quickly.”

The findings from the experiments will be used to further optimize the printing process for the four basic 3D printing operations (straight rods, straight rods with start/stop points, free-form rods, and connections between rods) and to prove the primary function of additive manufacturing in zero gravity. The aim is to test the technology in space, as it offers the chance to drastically reduce the cost of components in space technology. “With the AIMIS-FYT project, we have the opportunity to actively shape the future of space travel,” says Michael Kringer, project manager of the AIMIS-FYT team. The powerful little IDS camera has successfully recommended itself for future missions – on Earth and in space.

IDS Imaging Development Systems GmbH
The precise control of the rotational temperature of molecular ions opens up new possibilities for laboratory-based astrochemistry.

Chemical reactions taking place in outer space can now be more easily studied on Earth. An international team of researchers from the University of Aarhus in Denmark and the Max Planck Institute for Nuclear Physics in Heidelberg discovered an efficient and versatile way of braking the rotation of molecular ions.

Ions in a gaseous crystal: An alternating field between rod-shaped electrodes confines magnesium and magnesium hydride ions (red spheres) in a trap. A laser beam is used to cool the particles until they solidify to a crystal in which the distances between the ions are much greater than in a mineral crystal. A German-Danish team of researchers is able to slow down the rotation of the molecular ions with a highly tenuous, cold helium gas (spheres to the left and right of the ion crystal). © J. R. Crespo/O. O. Versolato/MPI for Nuclear Physics

Cooling down an ion crystal: A cloud of magnesium ions (blue spheres) and magnesium hydride ions (bound blue and green spheres) is confined between the four cylindrical electrodes of a Paul trap. A laser, depicted in this image as a bright transparent strip in the centre, cools the ions so that they solidify into a Coulomb crystal. When helium atoms (purple), which flow into the trap, collide with magnesium hydride ions, the rotation of the latter slows down – the rotational temperature drops. © Alexander Gingell/Aarhus University

The spinning speed of these ions is related to a rotational temperature. Using an extremely tenuous, cooled gas, the researchers have lowered this temperature to about -265 °C. From this record-low value, the researchers could vary the temperature up to -210 °C in a controlled manner. Exact control of the rotation of molecules is not only of importance for studying astrochemical processes, but could also be exploited to shed more light on the quantum mechanical aspects of photosynthesis or to use molecular ions for quantum information technology.

Cold does not equal cold for physicists. This is because in physics, there is a different temperature associated with each type of motion that a particle can have. How fast molecules move through space determines the translational temperature, which comes closest to our everyday notion of temperature. However, there is also a temperature for the internal vibrations of a molecule, as well as for its rotational motion around its own axes. Similar to a stationary car with its engine running, the internal rotation (the engine, in this case) does not translate into motion before the clutch is released. In the case of molecules, the many microscopic collisions between the particles which constitute gases, fluids, and solids couple the various forms of motion with each other. The different temperatures thus approach each other over time. Physicists then say that a thermal equilibrium has been established. However, how fast this equilibrium is reached depends on the collision rate, as well as on any external influences working against this equilibration. For example, the infrared radiation emanating from the contraction of an interstellar gas cloud can cause the rotation of molecules to quicken, even without changing the speed at which the molecules are travelling. These kinds of processes take a very long time in the emptiness of space, as there are very few collisions there.
The cooling method for the rotational temperature is quick and versatile

Time is totally irrelevant at cosmic dimensions, but in physical experiments it is crucial. Indeed, physicists can nowadays reduce the flight speed of molecules relatively quickly to almost absolute zero at -273.15 °C. However, it takes several minutes or hours for the rotation of non-colliding particles to cool to a similar level, making some experiments almost impossible. This may be about to change. “We have managed to cool down the rotation of molecular ions in milliseconds, and down to lower temperatures than previously possible,” says José R. Crespo López-Urrutia, Group Leader at the Max Planck Institute for Nuclear Physics. The researchers from the Max Planck Institute in Heidelberg and the group led by Michael Drewsen at Aarhus University froze molecular rotational motion at 7.5 K (or -265.65 °C). And not only that, as Oscar Versolato from the Max Planck Institute in Heidelberg, who played an important role in the experiments, explains: “With our methods we can choose and set a rotational temperature between about seven and 60 Kelvin, and are able to accurately measure this temperature in our experiments.” Unlike other methods, this cooling principle is very versatile, being applicable to many different molecular ions.

In their experiments, the team used a cloud of magnesium ions and magnesium hydride ions, prepared using methods pioneered in Aarhus. This ensemble was “confined” in an ion trap known as CryPTEx, which was developed by researchers at the Max Planck Institute for Nuclear Physics (see Background). The trap consists of four rod-shaped electrodes that are arranged in parallel, in pairs aligned one above the other and having opposite electrical polarities. A high-frequency alternating voltage is applied to the electrodes to confine the ions in the centre close to the longitudinal axis of the trap. The trap is cooled to a few degrees above absolute zero, and there is an excellent vacuum so that adverse collisions are very rare.

Collisions with cold helium atoms slow down the rotation of the molecular ions

In the trap, the physicists cooled the magnesium ions using laser beams which, to put it simply, slow down the ions with their photon pressure. The magnesium hydride ions in turn cool because of their interaction with the magnesium ions. This allowed the researchers to cool the translational temperature of the cloud to nearly minus 273 degrees Celsius, until several hundred particles solidified to form a regular crystal. In such crystals, the distances between the particles are very large, in contrast to the situation in crystals familiar from minerals. The particles, which the cooling laser causes to emit light, can thus be seen at their fixed positions under the optical microscope.

To apply a brake to the rotation of the molecular ions, and thus to reduce their rotational temperature, the team injected an extremely tenuous, cold helium gas into the trap. In the ion crystal, the helium atoms flying at a leisurely speed collide with the magnesium hydride ions rotating about their own axis trillions of times per second. The collisions cause the helium atoms to gradually slow down the molecular ions. “This process is similar to the tides,” explains José Crespo: “The rotating ion polarizing the neutral helium atom is a little bit like the moon producing the tidal bulges.” A dipole is thus induced in the helium atom, which tugs at the rotating molecular ion such that it rotates a little slower.
The helium atoms in the experiment mediate between the various temperatures as they transfer translational kinetic energy to the molecular ions in some collisions and remove rotational energy in others. This effect is also exploited by the team to heat the rotational motion of the molecular ions through the amplification of the regular micro-motion of the trapped particles.

Crystal size and shape control the heating of molecular ions

The physicists increase the micro-motion velocity of the molecular ions by varying the shape and size of the ion crystal in the trap: they knead the crystal, as it were, by means of the alternating voltage which is applied to the trap electrodes. The alternating field that the electrodes produce is equal to zero only along the trap axis. The further the molecular ions are located away from this axis, the more they feel the oscillating force of the field and the more violent their micro-motion is. Part of the kinetic energy of the swirling molecular ions is absorbed by the helium atoms in collisions, and these atoms in turn transfer it to the rotational motion of the ions, thus raising their rotational temperature.

For the Danish-German collaboration, the ability to control the rotation of the molecular ions involves not only the manipulation of the micro-motion, and thus of the rotational temperature, but also the quantum-mechanical measurement of this temperature. The scientists do this by exploiting the fact that the rotational motion of the molecules is quantised. Put simply: the quantum states of a molecule correspond to certain speeds of its rotation. At very cold temperatures the molecules occupy only very few quantum states. The researchers remove the molecules of one quantum state from the crystal by means of laser pulses whose energy is matched to that particular state. They determine how many ions are lost in this process, in other words how many ions occupy this particular quantum state, from the size of the crystal remaining. They determine the rotational temperature of the molecular ions by thus scanning a few quantum states.

Accurate control of quantum states is a prerequisite for many experiments

“Being able to control the rotation of the molecular ions and thus the quantum state so accurately is important for many experiments,” says José Crespo. Scientists can therefore recreate in the laboratory chemical reactions that take place in space if they can bring the reactants into the same quantum state in which they drift through interstellar space. Only in this way can one quantitatively understand how molecules are formed in space, and ultimately explain how interstellar clouds, the hotbeds of stars and planets, evolve both physically and chemically.

This speed control knob for rotating molecules could also contribute to a better understanding of the quantum physics of photosynthesis. In photosynthesis, plants use the chlorophyll in their leaves to collect sunlight, whose energy is ultimately used to form sugars and other molecules. It is not yet entirely clear how the energy required for this is quantum mechanically transferred within the chlorophyll molecules. To understand this, the researchers must once again very accurately control and measure the quantum states and the rotation of the molecules involved. The findings thus obtained could serve as the basis for imitating or optimising photosynthesis at some time in the future in order to supply us with energy.
Last but not least, this control is a prerequisite for quantum simulations as well as for many concepts of universal quantum computation. In quantum simulations, physicists mimic a quantum-mechanical system that is difficult, or even impossible, to examine directly by using another quantum system that is well understood and controllable. In universal quantum computers, which physicists are trying to develop, the aim is to process information extremely quickly using the quantum states of particles. Molecules are possible candidates for this, and their chances are now growing because molecular rotation can be controlled quantum mechanically. "Our method for cooling the rotation of molecules opens up new possibilities in a variety of different fields," says José Crespo. His team, too, will now use the new method to investigate open questions about the quantum-mechanical world.

CryPTEx – a trap for cold ions

CryPTEx, the Cryogenic Paul Trap Experiment, is a cryogenically cooled trap setup developed and built by the team of José R. Crespo López-Urrutia at the Max Planck Institute for Nuclear Physics (MPIK) in Heidelberg, based on a trap design by collaborator Michael Drewsen of Aarhus University (AU), in order to investigate highly charged ions (HCI). Production, trapping and spectroscopy of HCI are the fields of expertise of the Max Planck group, which uses various ultrahigh-vacuum cryogenic setups for these investigations. The specific conditions required for HCI studies are also very beneficial for the study of molecular ions. The Heidelberg team therefore moved CryPTEx to Aarhus and commissioned the apparatus there together with the local team. Trapping and manipulation of molecular ions are the specialty of the Aarhus group, which has pioneered many of the laser-based techniques now used in the field. Drewsen saw the novel opportunities for cooling molecular ions in the cryogenic setting, including the application of an ultra-tenuous helium buffer gas. CryPTEx thus stayed in Aarhus for a year, where the young scientists from both groups carried out long series of experiments and tested new ideas. During those experiments, ion crystallisation and buffer-gas cooling could be achieved simultaneously over a wide range of effective temperatures, down to the lowest ever recorded for a molecular ion.

Dr. José R. Crespo López-Urrutia | Max-Planck-Institute
- Based on the balanced chemical equation, be able to calculate the masses of reactants or products consumed or generated in a given reaction.
- Based on the balanced chemical equation, be able to calculate the moles of reactants or products consumed or generated in a given reaction.
- Understand how to convert between masses and moles in a chemical reaction using mole ratios and molar masses.

Check Your Understanding – Recalling Prior Knowledge
- How much hydrogen is needed to form 3.1 moles of tin according to the following reaction?
- SnO2 + 2 H2 → Sn + 2 H2O

Mole Ratios, Molar Masses, and Chemical Equations
How can we measure out a known amount of a reactant, since actually counting atoms and molecules is not a practical approach? How can we tell what amount of product was generated in a reaction? In most cases, the mass of a reactant or product is a relatively easy quantity to measure. Recall that the molar mass of a given chemical species can be determined by referencing the periodic table. If we know the identity of the substance we wish to measure, molar mass can be used as a conversion factor between mass and amount (in moles). For any given chemical reaction, we can describe the relationships shown in the figure below, which depicts how moles, mass, and mole ratios are related for a given chemical equation:

Mass Reactants ↔ Moles Reactants ↔ Moles Products ↔ Mass Products

AgNO3(aq) + NaCl(aq) → AgCl(s) + NaNO3(aq)

How many grams of each reactant are needed to produce 0.500 mol of silver chloride? First, we need to relate the mass of silver nitrate to the amount in moles of the product silver chloride. This can be accomplished by using a series of conversion factors in which all units cancel except for those of the desired answer (grams of silver nitrate). To do this, we will need the mole ratio between these two reaction components and the molar mass of silver nitrate. Then, we can perform the following calculation:

0.500 mol AgCl × (1 mol AgNO3 / 1 mol AgCl) × (169.87 g AgNO3 / 1 mol AgNO3) = 84.9 g AgNO3

In order to produce 0.500 moles of AgCl, we would need to start with 84.9 g of silver nitrate. A similar calculation can be performed to determine the necessary mass of sodium chloride.

In the chemistry lab, we frequently need to calculate the relationship between two reactants or products in a chemical reaction. For example, we may know the mass of one reactant and want to know how much of a given product will be generated if the reactant is fully consumed. We may also wish to know how much of a second reactant is required to fully react with the first reactant. These types of questions can be answered by using molar masses and mole ratios as conversion factors. We will illustrate this process with an example.

How many grams of lead(II) chloride would be produced if 1.67 g of lead(II) nitrate is allowed to react completely in the presence of a sodium chloride solution? How many grams of sodium chloride would be consumed in the process?
- Pb(NO3)2(aq) + 2 NaCl(aq) → PbCl2(s) + 2 NaNO3(aq)

First, we need to relate grams of lead(II) chloride to grams of lead(II) nitrate. We can set up the following expression, using the molar masses of each component and their mole ratio, obtained from the balanced equation:

1.67 g Pb(NO3)2 × (1 mol Pb(NO3)2 / 331.2 g Pb(NO3)2) × (1 mol PbCl2 / 1 mol Pb(NO3)2) × (278.11 g PbCl2 / 1 mol PbCl2) = 1.40 g PbCl2

Therefore, 1.40 g of lead(II) chloride would be produced if 1.67 g of lead(II) nitrate is fully consumed.
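If you want to double-check a hand calculation like the one above, the same chain of conversion factors is easy to express in a few lines of Python. This is only an illustrative sketch (the helper function and its name are ours, not part of the lesson), using the approximate molar masses quoted in this section:

```python
def mass_to_mass(mass_given_g, molar_mass_given, mole_ratio, molar_mass_wanted):
    """Convert grams of one species to grams of another via moles and a mole ratio."""
    moles_given = mass_given_g / molar_mass_given      # grams -> moles of the given species
    moles_wanted = moles_given * mole_ratio            # mole ratio from the balanced equation
    return moles_wanted * molar_mass_wanted            # moles -> grams of the wanted species

# Pb(NO3)2(aq) + 2 NaCl(aq) -> PbCl2(s) + 2 NaNO3(aq)
PB_NITRATE_G_MOL = 331.2   # approximate molar mass of Pb(NO3)2
PBCL2_G_MOL = 278.11       # approximate molar mass of PbCl2

print(round(mass_to_mass(1.67, PB_NITRATE_G_MOL, 1, PBCL2_G_MOL), 2))  # 1.4 g PbCl2
```

The sodium chloride mass worked out next can be checked the same way, by passing a mole ratio of 2 and the molar mass of NaCl.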
The amount of NaCl that would be used in this process can be calculated as follows:

1.67 g Pb(NO3)2 × (1 mol Pb(NO3)2 / 331.2 g Pb(NO3)2) × (2 mol NaCl / 1 mol Pb(NO3)2) × (58.44 g NaCl / 1 mol NaCl) = 0.589 g NaCl

In order to fully consume 1.67 g of lead(II) nitrate, we would need at least 0.589 g of NaCl.

- Using molar masses and mole ratios, we can find the relationships between the masses of the various reaction components for a given reaction.

Lesson Review Questions
- Aluminum reacts with oxygen to produce aluminum oxide according to the following equation: 4 Al + 3 O2 → 2 Al2O3
- How many grams of O2 are needed to produce 5 moles of Al2O3?
- How many grams of Al2O3 are produced from the reaction of 5 moles of Al?
- How many grams of Al are needed to produce 86.0 grams of Al2O3?
- How many grams of each reactant are needed to produce 0.500 mol of barium sulfate according to the following equation? BaCl2 + Na2SO4 → BaSO4 + 2 NaCl
- How many grams of each reactant are needed to produce 28.6 grams of copper(II) sulfide by the following reaction? Cu + SO2 → CuS + O2

Further Reading / Supplemental Links
- Stoichiometry Calculator: http://mmsphyschem.com/stoichiometry.htm
- Practice Balancing Chemical Equations:
- Chemical equation balancer: http://www.personal.psu.edu/jzl157/balance.htm

Points to Consider
- Our study of masses and amounts, as described by a given chemical equation, has assumed that mass is conserved for all chemical processes. How might you determine experimentally that this is the case for a given chemical reaction?
- So far, we have assumed that all of the reactants are utilized in the formation of products during each reaction. Can you think of a case in which one or more reactants would not be completely consumed?
Students should start out working individually on tables_linear_functions_warmup and then work with a partner after about 3-5 minutes. The warmup should follow the Think-Pair-Share model. While students work, make note of how students attempt to find the 10th term in the sequence. Some students will use additive reasoning and others will use multiplicative reasoning. The formula for finding the nth term of an arithmetic sequence can be given to students; however, they are much more apt to know how to use the formula if they can derive it themselves. When students share out their answers to the whole class, try to start with those that used additive reasoning and then have students who used multiplicative reasoning show their thought process (MP2). Both of these techniques will get you to the correct answer; one is just more efficient than the other (think about finding the 100th term). The students who used the multiplicative process may notice that one fewer common difference is added than the number of the term (simple example: 1, 3, 5, 7, ..., n. If we wanted to know the 5th term, we know we are adding four 2's to the first term, so 1 + 8 = 9). In our example 2, 5, 8, 11, ..., n, we will be adding nine 3's to the first term to come up with the 10th term, so 2 + 27 = 29. Make sure to bring this observation to the attention of the whole class.

See tables_linear_functions_launch, Slide 3. In this slide students are going to list their inputs and outputs in a table format to organize their work. Students should work with their partner to determine the following: 1) What are the first 8 terms of the sequence? 2) Put the first 8 terms into a table of values. 3) Find both a recursive and an explicit rule for this function. 4) Graph the coordinates in the coordinate plane and draw a line to model this sequence using the function. When students work on this launch/investigation they will be continuing to increase their understanding of the rate of change between values of x and f(x). For the first two examples, they will also be building their recursive and explicit formulas from a concrete model that is easier to understand.

The investigation will begin by having students find the same information for Slide 4. This is also a concrete model: students can use manipulatives to help them understand the pattern and the rate of change (MP6). As the investigation continues, the concept becomes more abstract, since students are not given a picture to help them understand the pattern. Students will need to use the given table to help them determine the rate of change. Teaching point: some students may have difficulty seeing that the rate of change is constant. Because the table skips some x-values, they will see that the changes in f(x) vary, and they may not realize how this is connected to the value of x changing at a proportional rate. When working with pairs of students, guide them to fill in the missing values so that they can see that the rate of change is, in fact, constant.

When discussing these three examples with the class, draw the three tables on the board. Have students fill in their answers for the input and output values. Add a third column to each table to show the common difference for each jump from one value to the next. Next, have students write down their explicit formulas for each of the three examples. Then have them do a think-pair-share around the following question: "What do you notice about the explicit formula and the table of values that you made?"
Give students a couple of minutes to write something down and then share their ideas with their partner (MP3). When students share out, guide their thinking towards the understanding that the common difference is the number that precedes the variable in their explicit formula (the slope of the line). For example, in the first tile question the explicit formula is f(x) = 2x - 1; the common difference between each of the subsequent terms is 2. This idea will continue to develop in future lessons as we investigate the idea of the slope of a line. Students who are more concrete thinkers can also connect to this by looking at how the common difference is presented in the diagrams in the first two examples: they can see how many tiles or discs are being added each time. Being able to see the same concept from multiple perspectives is key in mathematics.

Tables_linear_functions_close is designed to help students make a connection between two-variable equations and functions. Have students make a table of simple x-values* (inputs) and find the corresponding y-values (outputs). They can then graph their solutions on a piece of graph paper. You can also encourage students to be strategic in the x-values that they choose so as to have y-values that are simpler to graph. For example, question 1 has x + 2y = 4. An x-value of 2 would give a y-value of 1. In contrast, an x-value of 1 would give a y-value of 1.5. Obviously, these are all values that students can handle, but integer values are easier to graph than rational values.

*Simple x-values simply refers to integer values. Students can certainly choose rational values if they would like, but it may slow down their computation.
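For teachers who want to generate tables quickly or check student answers, the recursive and explicit rules from the warmup can be written as a short Python sketch (the function names here are ours, purely for illustration):

```python
# Recursive rule: each term adds the common difference to the previous term.
def nth_term_recursive(first, diff, n):
    term = first
    for _ in range(n - 1):          # add the common difference (n - 1) times
        term += diff
    return term

# Explicit rule: jump straight to term n with a(n) = first + (n - 1) * diff.
def nth_term_explicit(first, diff, n):
    return first + (n - 1) * diff

print(nth_term_recursive(2, 3, 10))  # 29
print(nth_term_explicit(2, 3, 10))   # 29 -- same answer, far fewer steps for a large n
```

The contrast between the two functions mirrors the additive versus multiplicative reasoning discussed in the warmup: both reach 29 for the 10th term, but only the explicit rule scales comfortably to the 100th term.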
Welcome to A2Z Calculators, where precision meets simplicity! In the world of scientific measurements, understanding how to calculate percent error is a fundamental skill. Whether you're navigating the realms of chemistry, working with data sets, or analyzing density values, we've got you covered. In this comprehensive guide, we'll walk you through the step-by-step process of calculating percent error, making it easy for you to grasp this essential concept.

What is Percent Error?
Percent error is a tool used to evaluate the accuracy of your measurements by comparing them to accepted or true values. It is a way to quantify the difference between what you observed or measured and what the actual value should be. The formula is straightforward:

Percent Error = ((Observed Value − True Value) / True Value) × 100%

1. Identify the True Value
Before diving into calculations, pinpoint the accepted or true value that serves as your benchmark. This could be a standard value from a reference source or an expected result.
2. Measure the Observed Value
Obtain your experimental or observed value through measurements, experiments, or calculations. This is the value you'll be comparing to the true value.
3. Calculate the Difference
Find the difference between the observed and true values. This step quantifies how much your measurement deviates from the expected result.
4. Apply the Formula
Plug the values into the percent error formula. Divide the difference by the true value and multiply by 100 to express the error as a percentage.
5. Round if Necessary
Consider the level of precision needed and round your percent error to an appropriate number of decimal places.

Let's walk through an example to solidify your understanding. Say you measured the density of a substance, and the accepted density is 8.00 g/cm³. Your measured density is 7.60 g/cm³.

Percent Error = ((7.60 − 8.00) / 8.00) × 100% = −5%

The negative sign indicates that your measured density is lower than the accepted value.

Tips for Success:
- Precision Matters: Be aware of the precision of your measurements and round your results accordingly.
- Context is Key: Understand the context of your experiment or analysis to interpret the significance of your percent error.
- Average Percent Error: If dealing with multiple measurements, calculate the average percent error for a more comprehensive assessment.

Congratulations! You've now mastered how to calculate percent error. Armed with this knowledge, you can enhance the accuracy of your experiments and measurements. Remember, at A2Z Calculators, we believe in making complex concepts accessible, and percent error is no exception.
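If you would like to automate the steps above, here is a minimal Python version of the same formula (the function name is ours, not a standard library routine):

```python
def percent_error(observed, true_value):
    """Return the signed percent error of an observed value relative to the true value."""
    return (observed - true_value) / true_value * 100

# Density example from above: measured 7.60 g/cm3 against an accepted 8.00 g/cm3.
print(round(percent_error(7.60, 8.00), 2))  # -5.0 (negative: the measurement is below the accepted value)
```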
Critical thinkers think clearly and rationally, making logical connections between concepts; such skills are essential for investigating and comprehending our environment. Critical thinkers are always seeking to expand their knowledge and to engage in autonomous self-learning. They do not simply accept things as they are.

A critical mind is also called a rational mind or an analytical mind. These terms indicate that the person has developed the ability to think critically. Some people may have been born with a natural inclination to think critically, but most learn how to do it through experience. The more we engage in thinking about something, the better we get at it. Learning how to think critically is a skill that can be developed through practice.

There are several methods used by educators to help students develop their critical thinking skills. One method is the use of "thought experiments". In a thought experiment, you imagine a scenario and reason through whether something could happen and why. For example, a teacher might ask what would follow if someone jumped off a cliff, and then push students to justify their answer rather than simply assert it. Thought experiments help students understand concepts by questioning whether something could possibly happen. Another method is the study of "logical fallacies" – the mistakes people make while reasoning or thinking logically.

Critical thinkers are often inquisitive and thoughtful individuals. They like exploring and probing new areas in search of information, clarification, and new answers. They pose appropriate questions, assess assertions and arguments, and differentiate between facts and opinions. Critical thinkers are also self-directed; they do not rely exclusively on others to make decisions for them. Skilled critical thinkers are able to recognize their own limitations and those of others. They are aware of bias and prejudice and try to remain objective when evaluating information and evidence. Finally, they strive to be honest with themselves and others.

Critical thinking is not only important in academics but also in everyday life. For example, critical thinkers are needed in order to resolve conflicts effectively, come up with solutions to problems, and make well-informed decisions. Professionals in fields such as law, medicine, education, business, and technology also require significant levels of critical thinking skill to do their jobs successfully.

There are many ways to improve one's critical thinking abilities. One can learn by reading good sources of information and discussing them with others. Of course, practicing logic games, working through thought experiments such as imagining what would happen if a star collapsed in on itself to form a black hole, and drawing conclusions from observed data are all helpful too!

Being reasonable and conscious of your own sentiments on an issue is required for critical thinking, as is the ability to reorganize your thoughts, prior knowledge, and understanding to accommodate new ideas or opinions. As a result, critical reading and critical thinking are the bedrock of meaningful learning and personal development. Critical reading is essential in any field of study because it helps you understand issues and concepts that may not be apparent from simply reading the book or article. This means that you need to read things critically so that you can draw your own conclusions and judge for yourself whether the information presented is accurate or not.
Only then can you make informed decisions about what you still need to learn and what you can set aside. In addition to being able to read texts critically, it is important that you also ask questions when you are studying something new. This allows you to build on what you already know and to avoid wasting time on topics that will not teach you anything new. Finally, remember that learning is not only about remembering information, but also about applying what you have learned in different situations. These are the skills needed to solve problems effectively while working on projects under deadline pressure. It is also important to note that critical thinking can only be developed through practice; therefore, do not be afraid to make mistakes, as long as you do not repeat them.

Critical thinkers are active rather than passive. They pose inquiries and carry out analyses. They intentionally employ methods and procedures to discover meaning or secure comprehension. Critical thinking is a mode of thought that generates ideas, analyzes problems, discovers solutions, and makes decisions. It is an approach to thinking that can be used to solve any type of problem. Critical thinking can be defined as the process of analyzing issues by examining different points of view and using this analysis to reach a decision.

Critical thinking is used by scientists, lawyers, politicians, teachers, doctors, nurses, administrators, and others in many fields of activity. It is also useful for people who want to think more clearly about their own thoughts and feelings. Critical thinking helps us understand why some ideas appeal to some people but not to others, and it enables us to judge the merits of different positions. This means that we can use our critical faculties to decide which ideas are worth following up on and which should be ignored. Critical thinking is sometimes divided into two types, formal and informal: formal critical thinking is based on established sets of rules, such as those found in logic books, while informal critical thinking applies the same habits of mind to everyday reasoning.

Critical thinking allows us to learn from new experiences through the process of continuous self-evaluation. It allows us to establish sound ideas and judgements, providing us with a foundation for a "logical and reasonable" emotional existence. Without this ability, we are left to merely follow our feelings at any given moment, which can lead to great confusion as well as intense pain when they change or disappear. We might make decisions based on emotion rather than reason, which could have negative consequences for ourselves and others. Critical thinking is therefore essential if we are to find a balance between our emotions and our thoughts, and to better understand how they influence each other. It also helps us develop skills that will be useful in life: for example, it allows us to analyze situations objectively and come up with solutions that take all factors into account. Finally, learning to think critically can be fun! By trying out different approaches and arguing for and against different views, we are playing around with ideas and concepts, which is what science is all about.

In conclusion, logical and critical thinking allow us to evaluate our experiences and relationships with humanity at large objectively, which in turn helps us build a sense of personal identity and meaning beyond just feeling alive at this moment in time.
What is a Ratio?
A ratio is a mathematical expression used to compare the size of one number with the size of another. It is commonly used both in mathematics and in professional environments. Here are some everyday examples of times when you could use ratios:
- When you convert your pounds to dollars or euros when you go on holiday
- When you calculate your winnings on a bet
- When you work out how many bottles of beer you need for a party
- When you share a packet of sweets fairly among your friends
- When you calculate how much tax you must pay on your income

Ratios are usually used to compare two numbers, though they can also be used to compare multiple quantities. Ratios are often included in numerical reasoning tests, where they can be presented in a number of different ways. It is therefore important that you are able to recognize and manipulate ratios however they are presented.

Different Ways of Presenting Ratios
Ratios are usually shown as two or more numbers separated by a colon, for example 8:5 or 1:4 or 3:2:1. However, they can also be shown in other ways; for example, the ratio 8:5 can equally be written as the fraction 8/5 or in words as "8 to 5".

Scaling a Ratio
One of the reasons that ratios are useful is that they enable us to scale amounts, meaning to increase or decrease the amount of something. This is particularly handy for things like scale models or maps, where really big numbers can be converted to much smaller representations that are still accurate. Scaling is also helpful for increasing or decreasing the amounts of ingredients in a recipe or chemical reaction. Ratios can be scaled up or down by multiplying every part of the ratio by the same number; for example, doubling every part of 2:3 gives 4:6, which represents the same relationship.

There are six practice questions below. Should you need further practice afterwards, we recommend the numerical reasoning packages available from JobTestPrep. These tests include ratio questions, with full explanations for all answers.

Question 1: Scaling Ratios
Olivia wants to make pancakes for nine friends, but her recipe only makes enough pancakes for three friends (the recipe is shown below). What quantity of the ingredients will she need to use?
Pancake recipe (serves 3)
- 100g flour
- 300ml milk
- 2 large eggs
To solve this question, you must first identify the ratio. This is a three-part ratio, whereby 100g flour : 300ml milk : 2 large eggs = 100:300:2. Next you need to work out how much you need to scale the recipe by. This recipe is for three people, but Olivia needs a recipe for nine. As 9/3 = 3, the ratio needs to be scaled by three (this is sometimes expressed as "by a factor of 3"). You then need to multiply each part of the ratio by 3:
- 100 x 3 = 300
- 300 x 3 = 900
- 2 x 3 = 6
Therefore, to make enough pancakes for nine people, Olivia will need to use 300g flour, 900ml milk and 6 large eggs.

Reducing a Ratio
Sometimes ratios are not presented in their simplest form, which makes them harder to manipulate. For example, if a farmer has seven chickens and together they lay 56 eggs every day, this would be represented by the ratio 7:56 (or, presented as a fraction, 7/56). Reducing a ratio means converting the ratio to its simplest form, which makes it easier to use. This is done by dividing both numbers of the ratio by the largest number that they can both be divided by (just the same as reducing a fraction to its simplest form). So, for example, dividing both parts of 7:56 by 7 gives 1:8. Therefore, the reduced ratio is 1:8, which tells us that each chicken lays eight eggs. This is far easier to use.
For example, if the farmer needed 96 eggs per day, he could use the ratio to work out how many chickens he would need for this to be possible. We know from the ratio that each chicken lays 8 eggs; therefore, if the farmer wants 96 eggs he must divide 96 by 8 to calculate the number of chickens required: 96/8 = 12 chickens. Similarly, if one of the farmer's chickens stopped laying eggs, he could use the ratio to work out how much that would reduce his total number of eggs to, i.e. 56 - 8 = 48.

Question 2: Reducing Ratios
Ella has 18 pigeons and they eat 54kg of grain per week. Jayden has 22 pigeons and they eat 88kg of grain per week. Who has the greediest pigeons?
To solve this question, you must first identify, then simplify, the two ratios:
Ella's ratio = 18:54; simplify this by dividing both numbers by 18, which gives a ratio of 1:3.
Jayden's ratio = 22:88; simplify this by dividing both numbers by 22, which gives a ratio of 1:4.
This tells us that Ella's pigeons each eat 3kg of grain per week, whilst Jayden's eat 4kg of grain per week. Therefore, Jayden's pigeons are greedier.

Finding Unknown Quantities From Existing Equivalent Ratios
One of the ways that ratios are particularly useful is that they enable us to work out new and unknown quantities based on an existing (known) ratio. There are a couple of ways of solving this type of problem. The first is to use cross-multiplication. For example, suppose there are four rings and seven bracelets in the jewellery box. If this ratio were maintained, how many rings would there be if there were 28 bracelets? Setting 4/7 = x/28 and cross-multiplying gives x = (4 × 28)/7 = 16 rings. Another method is to calculate the factor (size) of the increase in the number of bracelets (28/7 = 4) and then multiply the number of rings by that same number (4 x 4 = 16).

Question 3: Finding Unknown Quantities From Existing Equivalent Ratios
Gabriel and Mandeep are getting married. They have calculated that they will need 35 bottles of wine for their 70 guests. Suddenly they find out that another 20 guests are planning to attend. How much wine do they need in total?
Firstly, you need to work out the ratio of wine to guests they used: 35 wine : 70 guests. Then simplify this to 1 wine : 2 guests (or 0.5 bottles of wine per guest). They now have 90 guests attending (70 + the extra 20 = 90), so you need to multiply 90 by 0.5 = 45 bottles of wine in total. Watch out for the wording in this sort of question, which can sometimes ask for the total required and sometimes for the extra required.

Common Mistakes and Things to Look Out For
Make sure you are reading the ratio the right way. For example, a ratio of 2 pink to 6 red should be expressed 2:6, not 6:2. The first item in the sentence comes first. Be careful with the wording. For example, people often make mistakes with questions such as "Bob has eight pigs and four cows. Calculate the ratio of pigs to animals." It can be tempting to say the ratio is 8:4, but that would be incorrect because the question asks for the ratio of pigs to animals. You therefore need to calculate the total number of animals (8 + 4 = 12), so the correct ratio is 8:12 (or 2:3). Don't be put off by units or decimals. The principles remain the same whether they apply to whole numbers, fractions, £ or m2, but do note the units in your calculations and, where possible, convert them to the same units. For example, if you have a ratio of 500g to 0.75kg, convert both sides to either grams or kilograms.

Question 4: Scaling Ratios
Otto knows that his car requires 27.5 litres of fuel to drive 110 miles.
He wants to drive from London to Edinburgh, a distance of 405 miles. How much fuel will he need?
- To calculate this you first need to work out the ratio of fuel to miles driven: 27.5:110.
- The next step is to simplify this ratio by dividing both halves by 27.5, to find how many miles can be driven on one litre of fuel: 27.5/27.5 = 1 and 110/27.5 = 4, so the ratio is 1:4. One litre of fuel is needed for every four miles.
- Next you need to scale the ratio: 405/4 = 101.25. So Otto needs 101.25 litres of fuel to drive his car 405 miles.

Question 5: Reducing Ratios
Hugo and Claudia share 600g of chocolate in the ratio 4:2. How much chocolate does Claudia get?
To solve this question, you must first add together the two halves of the ratio, i.e. 4 + 2 = 6. Then you need to divide the total amount by that number, i.e. 600/6 = 100. To work out how much each person gets, you then multiply their share of the ratio by 100. So Hugo gets 400g and Claudia gets 200g.

Question 6: Scaling Ratios
Caleb wants to paint his house. He checks the instructions on the paint tin and finds that one tin of paint will cover 5m2. The total wall area Caleb needs to paint is 40m2. How many tins of paint will he need to buy?
- First you need to identify the ratio of paint tins to wall area. This is 1:5m2.
- Next you need to work out how much to scale the ratio by. To do this you divide 40m2 by 5m2 (because Caleb has a total wall area of 40m2, and we only know the coverage for 5m2, so we need to know how many times bigger 40m2 is than 5m2).
- 40/5 = 8; we therefore need to scale the ratio by 8.
- To do this you multiply each part of the ratio by 8: 1 x 8 = 8 and 5 x 8 = 40.
- This gives the new ratio 8:40m2, which tells us that Caleb will need 8 tins of paint.

If you would like to practise more ratio-based questions, WikiJob has a numerical reasoning test app, available on both iTunes and Google Play. All tests include a timer and worked solutions at the end. Give it a try!
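If you want to check answers to questions like these programmatically, the two basic moves used throughout this article – reducing a ratio and scaling it – can be written as a short Python sketch (the function names are ours, purely for illustration):

```python
from math import gcd

def reduce_ratio(a, b):
    """Divide both parts by their greatest common divisor, e.g. 7:56 -> 1:8."""
    g = gcd(a, b)
    return a // g, b // g

def scale_ratio(ratio, factor):
    """Multiply every part of a ratio by the same factor, e.g. (100, 300, 2) x 3."""
    return tuple(part * factor for part in ratio)

print(reduce_ratio(18, 54))           # (1, 3)  -- Ella's pigeons eat 3 kg each
print(reduce_ratio(22, 88))           # (1, 4)  -- Jayden's pigeons eat 4 kg each
print(scale_ratio((100, 300, 2), 3))  # (300, 900, 6) -- the pancake recipe scaled for nine people
```

Reducing uses the greatest common divisor, exactly as in the chicken and pigeon examples, and scaling multiplies every part by the same factor, as in the pancake recipe.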
Worms are a type of malicious software designed to replicate themselves and spread to other computers. Unlike viruses, worms do not need to attach themselves to a host file or program. They are standalone programs that can move through computer networks independently and propagate without any user intervention. One of the defining characteristics of worms is their ability to self-replicate. Once a worm infects a computer, it can search for vulnerabilities or security loopholes to exploit and then create copies of itself to infect other systems on the same network. This rapid replication process enables worms to spread quickly and efficiently across various devices and networks. Worms can cause significant damage to infected systems. They can consume network bandwidth, slow down computer performance, and even crash entire networks. Some worms are designed to steal sensitive information, such as login credentials or financial data. By harnessing a large number of infected computers, known as a botnet, worm creators can control and utilize these compromised machines for various malicious purposes, including launching DDoS attacks or distributing spam emails. To protect against worms, it is essential to keep your operating system and software up to date with the latest security patches. Employing a robust firewall and implementing strict access controls can also prevent unauthorized access and halt the spread of worms within a network. Regularly scanning your system with reputable antivirus or anti-malware software can help detect and remove any worm infections. In summary, worms are malicious programs that replicate themselves and spread to other computers through network vulnerabilities. Their ability to self-replicate and propagate rapidly makes them a significant threat to computer networks. Employing proactive security measures and maintaining updated software can help mitigate the risk of worm infections and protect your system from potential harm. Viruses are malicious programs that infect and alter other software or files on a computer. They are named after their biological counterparts, as they exhibit similar characteristics of spreading and causing damage. Viruses attach themselves to host files or programs, and when these files are executed or opened, the virus starts replicating and spreading throughout the system. The primary goal of a computer virus is to cause harm or disrupt the normal functioning of a computer or network. They can corrupt or delete files, slow down system performance, steal personal information, and even render a system completely inoperable. Some viruses are designed to exploit system vulnerabilities, allowing unauthorized access or control to be gained by the attacker. Viruses can spread through various vectors, such as infected email attachments, shared files or drives, or malicious websites. They often rely on user actions, such as downloading infected files or clicking on suspicious links, to initiate their infection process. Once inside a system, viruses can propagate and infect other files, spreading to other computers and networks. Protecting against viruses requires a multi-layered approach. Installing reputable antivirus software can help detect and remove known viruses from your system. Keeping your operating system and software up to date with the latest security patches is crucial, as it helps to close any vulnerabilities that viruses may exploit. 
Exercise caution when opening email attachments or downloading files from unknown sources, as these can often be carriers of viruses. Additionally, practicing safe browsing habits, such as avoiding suspicious websites and refraining from clicking on untrusted links, can further reduce the risk of virus infections. Regularly backing up important files and data is also recommended, as it allows for easy recovery in the event of a virus attack. In summary, viruses are malicious programs that infect and alter other software or files. They can cause a range of harms, from file corruption to system failure. Protecting against viruses involves a combination of preventive measures, such as antivirus software, system updates, safe browsing practices, and regular backups. Trojans, also known as Trojan horses, are a type of malicious software that disguises itself as legitimate and harmless programs or files. Unlike worms or viruses, Trojans do not replicate themselves. Instead, they rely on social engineering tactics to deceive users into downloading and executing them. Trojans often masquerade as free software, games, or even email attachments, enticing users to open or install them. Once executed, Trojans can perform a variety of malicious activities, depending on their specific purpose. Some Trojans are designed to steal sensitive information, such as login credentials or banking details. Others may create backdoors in the system, allowing remote access for hackers to control the infected computer. One of the distinguishing features of Trojans is their ability to remain undetected by antivirus programs. By disguising themselves as legitimate files, Trojans can bypass security measures and gain access to a system unnoticed. This makes them a significant threat to individuals and organizations alike. Protecting against Trojans requires a combination of preventive measures and user awareness. It is important to exercise caution when downloading files or installing software from unknown sources. Be wary of suspicious email attachments or links, as these can often be carriers of Trojans. Regularly updating your antivirus software and running scans can help detect and remove any Trojans that may have infiltrated your system. Furthermore, practicing good cybersecurity hygiene can help minimize the risk of Trojan infections. This includes keeping your operating system and software up to date with the latest security patches, using strong and unique passwords, and avoiding clicking on suspicious links or visiting potentially unsafe websites. In summary, Trojans are malicious software that masquerade as legitimate files or programs. They rely on social engineering tactics to deceive users into downloading and executing them. Preventing Trojan infections requires a combination of user awareness, cautious internet practices, and robust cybersecurity measures. Botnets are networks of computers that have been infected with malicious software, known as bots or zombies. These infected computers are usually controlled by a central server or command-and-control (C&C) infrastructure. Botnets are typically created by cybercriminals to carry out various malicious activities, such as launching distributed denial-of-service (DDoS) attacks, distributing malware, or stealing sensitive information. The main goal of creating a botnet is to harness the collective computing power and internet bandwidth of the infected computers. 
By controlling a botnet, cybercriminals can launch large-scale attacks that would be difficult to accomplish with a single computer. Through DDoS attacks, for example, botnets can overwhelm targeted websites or networks with an overwhelming amount of traffic, making them inaccessible to legitimate users. Botnets are created by infecting computers with malware, often through the use of worms or Trojans. Once a computer is compromised, it becomes part of the botnet and can be remotely controlled by the botnet operator. Botnets can consist of a few hundred to millions of infected devices, ranging from personal computers to smartphones and IoT devices. Detecting and mitigating botnets can be challenging due to their distributed nature and the ability of botnet operators to adapt their techniques. However, there are several measures that can help minimize the risk of botnet infections. Keeping your operating system, software, and antivirus programs up to date with the latest security patches is crucial. Using strong and unique passwords, enabling two-factor authentication, and practicing safe browsing habits can also reduce the chances of falling victim to botnet attacks. Furthermore, network administrators can implement stricter access controls and monitoring systems to detect and block botnet traffic. Regularly scanning your devices and network for malware and suspicious activities can help identify and remove any potential bot infections. In summary, botnets are networks of infected computers that are controlled by cybercriminals. These networks are used for carrying out various malicious activities, such as launching DDoS attacks or distributing malware. Protecting against botnets requires a combination of preventive measures, user awareness, and robust network security practices. Ransomware is a type of malicious software designed to encrypt a victim’s files or lock them out of their own system until a ransom is paid. It is a growing threat in the world of cybercrime and can cause significant financial and emotional damage to individuals and organizations. Ransomware typically infects a computer through email attachments, malicious downloads, or exploiting vulnerabilities in software or operating systems. Once inside a system, it encrypts files, making them inaccessible to the victim. The attackers then demand a ransom, usually in cryptocurrency, in exchange for the decryption key or the release of the locked system. The impact of a ransomware attack can be devastating. In addition to the loss of valuable data, individuals and businesses often face financial losses from paying the ransom or the costs associated with recovery and remediation. Ransomware attacks can also result in reputational damage, as the victims may lose the trust of their customers or clients. Preventing ransomware attacks requires a multi-layered approach. First, it is crucial to maintain up-to-date security software and operating systems, as these can help detect and block known ransomware threats. Regularly backing up important files and data offline is also essential, as it provides a means to restore data without paying the ransom. Additionally, exercising caution when opening email attachments, downloading files, or clicking on links can help prevent ransomware infections. Implementing email filtering and web security solutions can further reduce the risk of malicious content reaching users’ inboxes or browsers. 
Network segmentation and user access controls can also limit the spread of ransomware within an organization. In the unfortunate event of a ransomware attack, it is essential to avoid paying the ransom. There is no guarantee that the attackers will provide the decryption key or release the locked system even after payment. Instead, victims should report the incident to law enforcement agencies and seek assistance from cybersecurity professionals who may be able to help recover the encrypted data. In summary, ransomware is a severe form of malware that encrypts files or locks users out of their systems until a ransom is paid. Preventing ransomware requires a combination of proactive security measures, user awareness, and a robust backup strategy. By implementing these preventive measures, individuals and organizations can reduce the risk of falling victim to ransomware attacks. Spyware is a type of malicious software that is designed to secretly monitor and gather information about a user’s activities on their computer or mobile device. It is often installed without the user’s consent or knowledge and can be highly intrusive and damaging to privacy. The main objective of spyware is to collect sensitive information, such as passwords, banking details, browsing habits, or personal data. This information is then used for various purposes, including identity theft, unauthorized access to accounts, or targeted advertising. Spyware can also track keystrokes, capture screenshots, record audio or video, and intercept communications, effectively making the infected device a surveillance tool for cybercriminals. Spyware typically enters a system through deceptive methods, such as bundled with legitimate software downloads or disguised as a helpful utility. It can also exploit vulnerabilities in the operating system or web browsers to gain access to the victim’s device. Once installed, it runs silently in the background, constantly monitoring and transmitting the captured information to the attacker. Protecting against spyware requires a combination of proactive measures and user awareness. It is essential to install reputable antivirus and anti-spyware software that can detect and remove known spyware threats. Keeping operating systems and software up to date with the latest security patches can also help mitigate vulnerabilities that spyware may exploit. Practicing safe browsing habits is crucial in preventing spyware infections. Avoid clicking on suspicious links or downloading files from untrusted sources. Be cautious of pop-up ads or unexpected requests for personal information, as these can often be indicators of spyware activity. Regularly scanning your system for potential spyware infections and monitoring network traffic can help identify any suspicious behavior. Implementing firewall protection and using robust security settings can also help block unauthorized access and reduce the risk of spyware installations. If you suspect your device is infected with spyware, it is essential to take immediate action. Disconnect from the internet and run a comprehensive scan with your antivirus software. If the infection persists, seek professional help from cybersecurity experts who can assist in removing the spyware and securing your device. In summary, spyware is a malicious software that silently monitors and captures user information for malicious purposes. Protecting against spyware requires a combination of security software, safe browsing habits, and regular system scans. 
By taking proactive measures and being vigilant, users can minimize the risk of falling victim to spyware infections and protect their privacy. Adware is a type of software that displays unwanted advertisements on a user’s computer or mobile device. It is often bundled with free software downloads and is installed without the user’s consent. While not necessarily malicious, adware can be intrusive, disruptive, and impact the overall performance of a device. The primary purpose of adware is to generate revenue for the software developer by delivering targeted advertisements to users. These advertisements can appear as pop-ups, banners, or in-text links, and often disrupt the user’s browsing experience. Adware tracks the user’s online activities and gathers information about their browsing habits, allowing advertisers to deliver tailored ads. While some adware may be relatively harmless, others can be more malicious. They can display misleading or fraudulent advertisements that lead to phishing websites or malware downloads. Some adware can even modify browser settings, redirecting users to unwanted websites or search engines. In extreme cases, adware can slow down system performance or consume excessive network bandwidth. Preventing adware infections involves exercising caution when downloading and installing software. It is important to read End User License Agreements (EULAs) and privacy policies before installing any software, as these may disclose the presence of bundled adware. Opting for custom installation and carefully reviewing each step can help avoid unwanted installations. Using reputable antivirus software with adware detection capabilities is also recommended. Regularly scanning your system for adware and updating security software with the latest definitions can help detect and remove any potential adware infections. In addition, keeping your operating system and software up to date with the latest security patches is crucial. Software updates often include bug fixes and security enhancements that can protect against adware vulnerabilities. It is also helpful to enable pop-up blockers and configure privacy settings within your web browser to reduce the likelihood of encountering adware. If your device is already infected with adware, it is important to take action to remove it. You can try uninstalling the adware through the control panel or using specialized adware removal tools provided by reputable cybersecurity companies. If the adware cannot be removed manually, seeking assistance from a professional may be necessary. In summary, adware is unwanted software that displays intrusive advertisements on a user’s device. Preventing adware infections involves exercising caution when downloading software, using reputable antivirus software, and maintaining up-to-date software versions. By implementing these measures, users can minimize the impact of adware and maintain a safer and more enjoyable online experience.
Grade 5 - Mathematics
5.37 Fractions Review

A fraction is another way of expressing division. The expression x/y is also written as x ÷ y; x is known as the numerator and y is known as the denominator. A fraction written as a combination of a whole number and a proper fraction is called a mixed fraction or mixed number.

To generate equivalent fractions of any given fraction, we proceed as follows: multiply the numerator and denominator by the same number (other than 0), or divide the numerator and denominator by a common factor (other than 1), if any. Two fractions are equivalent if the product of the numerator of the first and the denominator of the second is equal to the product of the denominator of the first and the numerator of the second. To reduce a fraction, divide both the numerator and denominator by their largest common factor. A fraction is in its lowest terms if the numerator and denominator have no common factor other than 1. Of two fractions having the same denominator, the one with the greater numerator is greater. Of two fractions having the same numerator, the one with the greater denominator is smaller.

Operations with Fractions: To add or subtract unlike fractions, first convert them into like fractions and then perform the required operation on the like fractions so obtained. The product of two or more fractions is a fraction whose numerator is the product of their numerators and whose denominator is the product of their denominators. When the product of two fractions, or of a fraction and a whole number, is 1, then either of them is called the reciprocal of the other. The number zero (0) has no reciprocal. Dividing a fraction or a whole number by a fraction or a whole number (other than zero) is the same as multiplying the first by the reciprocal of the second.

Answer the following questions:
- In the fraction 4/3, __ is the numerator.
- Which of these is an equivalent fraction of 5/9: 25/35, 15/45, 25/45, 5/25?
- Find the difference 1/2 - 1/8.
- Which is the greatest and which the least among these fractions: 1, 12/5, 2/3?
- Find the value of y in y/3 + 4/3 = 2.
- What is one-third of 24?
- Convert 1.3 to a fraction.
- 1/4 + 13/4 = ____
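For students (or parents) checking answers like those above, Python's standard fractions module applies exactly the rules reviewed here – equivalent fractions, lowest terms, and arithmetic with unlike denominators. A small illustrative sketch:

```python
from fractions import Fraction

print(Fraction(25, 45) == Fraction(5, 9))  # True  -- 25/45 is equivalent to 5/9
print(Fraction(1, 2) - Fraction(1, 8))     # 3/8   -- unlike fractions are converted automatically
print(Fraction(1, 3) * 24)                 # 8     -- one-third of 24
print(Fraction("1.3"))                     # 13/10 -- converting a decimal to a fraction
print(Fraction(1, 4) + Fraction(13, 4))    # 7/2   -- 1/4 + 13/4 in lowest terms
```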
© 2013 by the Mineralogical Society of America

Meteorites originating from asteroids are the oldest-known rocks in the Solar System, and many predate formation of the planets. Refractory inclusions in primitive chondrites are the oldest-known materials, and chondrules are generally a few million years younger. Igneous achondrites and iron meteorites also formed in the first five million years of the protoplanetary disk and escaped accretion into planets. Isotopic dates from these meteorites serve as time markers for the Solar System's earliest history. Because of the unique environments in the protoplanetary disk, dating the earliest meteorites has its own opportunities and challenges, different from those of terrestrial geochronology.

Understanding the processes that transformed a cloud of interstellar gas into our Solar System, the only planetary system that is known to sustain life, is a key step in the quest for our origins. Due to recent discoveries of Earth-like exoplanets and the rapid accumulation of astronomical observations of young stellar objects, we have obtained, for the first time in history, an opportunity to place the formation of our Solar System in the context of an emerging general model of formation and evolution of planetary systems. In his book On the Origin of Species, Charles Darwin referred to the formation of our Solar System as "so simple a beginning," but it is now realized that the beginning was not simple at all. Most stars are born in a sequence of complex processes in clusters within giant molecular clouds (Lada and Lada 2003). In such dynamic and short-lived environments, accreting protoplanetary disks do not evolve in isolation. Irradiation and influx of matter from nearby massive stars can change the structure and composition of the protoplanetary disk. The accretion of our Solar System is seen as an assembly of hot and cold domains, pristine dust and partially molten planetesimals that coexisted and interacted for a short period – less than 10 million years – some 4.5 billion years ago. Understanding the nature of the processes involved is impossible without accurate knowledge of their timing. The key events of accretion and planetary growth can be sequenced with high precision and accuracy by means of U–Pb and extinct radionuclide dating of the oldest, best preserved meteorites and their components, combined with supporting information about metamorphism, aqueous alteration and shock history, necessary to validate the ages. In this paper, we discuss how the ages of the oldest solids are determined and how researchers are striving to improve understanding of the sequence of events that converted a dense clump in an interstellar molecular cloud into the planetary system we inhabit. Our review is complementary to the recent reviews of the early Solar System that are mainly concerned with the processes and application of the age data (Kleine and Rudge 2011) or with analytical techniques (Zinner et al. 2011).

COSMOCHRONOLOGY, COSMOCHEMISTRY AND STAR FORMATION

The early history of our Solar System cannot be observed directly. It is recorded in the early minerals and rocks that were removed from the final stages of accretion before formation of the planets. These primitive rocks are preserved in asteroids that experienced only moderate heating and in comets. Other asteroids that were extensively melted are thought to be the sources of igneous meteorites.
Cosmochronology is an application of the methods of isotopic dating to extraterrestrial rocks and minerals. A simplified view of the formation of the Solar System is shown in Figure 1. It is important to note that the astronomical and cosmochemical timescales use different reference “zero” points: ignition of the star in astronomy, which cannot be directly determined by means of isotopic dating, and formation of the first solid materials in cosmochemistry, which cannot be directly determined by means of astronomical observations. Finding a common reference point for the astronomical and cosmochemical timescales is one of the main goals in the development of a general theory of planetary system formation. Stars and their planetary systems form in giant molecular clouds. In these environments, accretion disks are polluted by ejecta and stellar winds from nearby rapidly evolving massive stars. Freshly synthesized short-lived radionuclides are injected into the solar nebula during the first three stages of accretion (Fig. 1). The decline in the abundance of these radionuclides until extinction can be used for dating early Solar System processes (Kita et al. 2005). The method is similar to using the abundance of 14C produced by the interaction of cosmic rays with the Earth's atmosphere for dating in archeology. In extinct radionuclide dating, it is assumed that the radionuclide was uniformly distributed in the solar nebula. The abundance of radionuclides is determined from the distribution of their decay products. The short-lived radionuclides are produced by two dominant mechanisms: stellar nucleosynthesis followed by injection into the nascent Solar System, and spallation, where the breaking of larger nuclei produces radioactive nuclear fragments, which could have occurred within the Solar System. Identifying the production mechanisms is not straightforward. While 10Be is produced only in spallation reactions and 60Fe only by nucleosynthesis in massive stars, 53Mn and 26Al are produced by both stellar nucleosynthesis and irradiation (Huss et al. 2009). From U–Pb dating combined with extinct radionuclide abundances, we can determine at what stages of accretion freshly produced radionuclides were added to our Solar System. COSMOCHRONOLOGY AND GEOCHRONOLOGY Cosmochronology and geochronology share basic principles and many analytical techniques. Interaction and exchange of experience between the two research communities are mutually enriching. Because of unique environments in the protoplanetary disk that differ from those on the surfaces and in the interiors of the Earth and other planets, dating the earliest meteorites and their components has its own opportunities and challenges. In “terrestrial” geochronology, the development of sophisticated ways of extracting simple, closed-system parts of crystals, and accurately analyzing them, proved much more productive than analyzing bulk mineral fractions and using elaborate models to interpret their isotopic systems. Sequencing early Solar System history requires a similar refinement in isotopic dating. Covering the great variety of processes that need dating requires many chronometers and analytical techniques. Most meteorites are ultramafic or mafic in composition, and minerals that concentrate radioactive parent elements and effectively exclude daughter elements, such as zircon for U–Pb, are only rarely found in meteorites. Concentrations of parent nuclides in meteorites and their minerals are usually very low, making the analyses demanding. 
Finally, meteorites are assorted random samples from an unknown, and possibly large, range of parent asteroids. Under these circumstances, the development of a coherent dating strategy is a great challenge for the small community of cosmochronologists. WHAT ARE WE DATING? Three central, and closely related, questions of cosmochronology are: Which processes are we dating? Which isotopic systems and techniques do we need to obtain those dates? And which meteorites do we need to analyze to get the dates of the processes we are interested in? Isotopic clocks measure the timing of the processes that fractionate parent and daughter elements. From this seemingly trivial notion, it follows that some processes can be directly dated, whereas others cannot. The datable processes include melt crystallization (fractionation driven by crystal–melt partitioning), metamorphism (fractionation due to growth of new minerals in the solid state), metasomatism (fractionation driven by solubility in fluids), condensation and evaporation (volatility-induced fractionation), and metal–silicate separation (fractionation driven by the affinity of certain elements to Fe–Ni metal as opposed to silicate minerals, i.e. siderophile versus lithophile properties). Parent–daughter-element fractionations by magmatic, metamorphic and metasomatic processes are well known and widely used in terrestrial geochronology, whereas fractionations by metal–silicate affinity and by volatility are unique to the early Solar System. Metal–silicate fractionation, such as in planetesimal core formation, influences the 107Pd–107Ag, 60Fe–60Ni and 182Hf–182W isotopic systems (Kleine and Rudge 2011). Differences in volatility are important for many parent–daughter pairs (Fig. 2). In solids that condense from a cooling gas, an isotope chronometer starts measuring time when both parent and daughter isotopes are retained in the solid phase. In several parent–daughter pairs – 26Al–26Mg, 41Ca–41K, 129I–129Xe, and U–Pb – parent elements are much more refractory than the decay products, and volatility-driven fractionation can be important for using these systems as chronometers. Calcium–aluminum-rich inclusions (CAIs), chondrules, and achondrites – and minerals that comprise them – experienced both volatility-driven and igneous fractionation. In some cases, it is possible to date these processes separately using different scales of sampling, for example, whole-rock versus microbeam analysis of minerals. Several processes in the protoplanetary disk, most importantly accretion of solids into larger aggregates, planetesimal collisions and planetary accretion, do not cause chemical fractionation of elements and therefore cannot be dated directly. Their ages can only be bracketed or approximated using associated processes, such as the formation of new solids from shock melt. Which Isotopic Systems? Four isotopic systems have become the main contributors to modern early Solar System chronology: 207Pb/206Pb 26Al–26Mg, 53Mn–53Cr and 182Hf–182W. These isotopic systems feature in recent reviews of early Solar System chronology (Nyquist et al. 2009; Dauphas and Chaussidon 2011). Their wide applicability is based on their presence and fractionation in a variety of minerals and rocks, including both chondrites and achondrites. Several short-lived isotope chronometers, e.g. 92Nb–92Zr, 107Pd–107Ag and 41Ca-41K, are used when the parent nuclide is highly concentrated or when parent–daughter fractionation allows good temporal leverage. 
Other isotopic systems, including initial Sr, 129I–129Xe, U–Th–He and the systems based on the decay of 244Pu, popular in the past, are now forgotten or used only rarely. The group of chronometers based on the decay of extant radionuclides – 87Rb–87Sr, 147Sm–143Nd, 40Ar–39Ar and 176Lu–176Hf – usually yield dates with ≥10 Ma uncertainties, which are insufficient for resolving processes in the protoplanetary disk but provide valuable information about possible late disturbances. Early studies of the most common and easily available meteorites, such as eucrites and equilibrated ordinary chondrites, helped to establish the main benchmarks of early Solar System evolution. Eventually it became clear that their geological history was very complex and eventful, and meteorites of other classes, although rare, are better suited for high-resolution dating of the stages of nebular condensation and accretion. The modern chronology of Solar System formation is based primarily on the studies of three groups of materials (Fig. 3): (1) a relatively small number of exceptionally old and well-preserved igneous meteorites, such as angrites, anomalous eucrite-like meteorites and some unclassified basaltic achondrites (Wadhwa et al. 2009; Bouvier et al. 2011); (2) chondrules from well-preserved, unequilibrated ordinary and carbonaceous chondrites; and (3) CAIs and amoeboid olivine aggregates (AOAs) from chondrites. Establishing accurate age relationships between these groups of materials is among the most important goals of early Solar System chronology. The principles of timescale construction using two chronometers, U–Pb and the extinct radionuclide system 26Al–26Mg, are illustrated in Figure 4. Direct comparison of different chronometric systems is not a trivial task. Two isotopic clocks in the same rock can read the timing of different events because of the differences in volatility, diffusion rate and chemical properties of parent and daughter elements. When we compare U–Pb and 26Al–26Mg ages of chondrules and chondrites, we have to consider that the parent elements may reside in different minerals. Chondrule mesostasis is the primary host of both Al and U, but the secondary host minerals are different: feldspar for Al, and Ca phosphates for U. The diffusion rates of the daughter isotopes (Pb and Mg) are also different and mineral dependent, so that in slowly cooled meteorites the U–Pb system in phosphates and the 26Al–26Mg system in feldspar could have closed at different times. HOW WELL DO WE KNOW THE FOUNDATIONS? It was thought, until recently, that the rates of decay of radionuclides used in cosmochronology were well known and that the isotopic ratios of elements are constant, apart from the accumulation of decay products and relatively minor mass-dependent fractionation. These tenets have been reexamined in several recent studies. Half-Lives of Parent Radionuclides In the last ten years, half-lives have been precisely redetermined for four isotopes used in early Solar System chronology: 182Hf (Vockenhuber et al. 2004), 41Ca (Jörg et al. 2012), 60Fe (Rugel et al. 2009) and 146Sm (Kinoshita et al. 2012). The first two papers confirm previously accepted values with greatly improved precision, whereas the latter two differ substantially from the currently used values. Obtaining reliable half-life values requires a combination of advanced decay counting, careful control of radiochemical purity, and accurate concentration determination with isotope dilution mass spectrometry. 
Many older half-life studies lack at least one of these components, and their results need confirmation. Isotopic Composition of Uranium The 238U/235U ratio, which was considered constant until recently, is now known to be variable and offset from the previously accepted value. Variations among the CAIs are most prominent (Brennecka et al. 2010), and it is currently unclear whether the 238U/235U ratio in bulk chondrites and achondrites is variable at a smaller scale and identical to the 238U/235U ratio in the Earth (Bouvier et al. 2011; Brennecka and Wadhwa 2012; Connelly et al. 2012). Revisions to the Pb isotope chronology of meteorites, with consideration of 238U/235U variability, are being undertaken by several research groups. The U isotope ratios of many meteorites precisely dated with the 207Pb/206Pb method are still unknown, and their determination is one of the pressing tasks in the refinement of early Solar System chronometry. THREE TALES OF METEORITE AGES 60Fe–60Ni: Not a Chronometer, and No Longer a Proof for Supernova? 60Fe–60Ni has recently been the most troubled of all cosmochronometers. The first TIMS work by Shukolyukov and Lugmair (1993) found that the 60Fe/56Fe abundance ratio in eucrites was below 10−8. Ion microprobe analyses of chondrules (Tachibana et al. 2006) yielded much higher 60Fe/56Fe ratios, implying the need for an additional source of 60Fe, such as a supernova, where this isotope was produced shortly before injection into the solar nebula. New MC–ICP–MS data for both differentiated meteorites and chondrites indicate an 60Fe/56Fe ratio around 10−8, close to the original TIMS value (Regelous et al. 2008; Quitté et al. 2011). The high SIMS value appears to be an artefact of data reduction (Ogliore et al. 2011). As it stands now, the abundance of 60Fe is consistent with the galactic background and no longer requires an input of material to the protosolar nebula from a nearby supernova. Old Ages of Chondritic Carbonates: An Analytical Artefact Clarified One of the long-standing inconsistencies in the timing of early Solar System events was the exceptionally old (close to the age of CAIs) 53Mn–53Cr age of secondary carbonates (calcite and dolomite) in chondrites (de Leuw et al. 2009). Taken at face value, these carbonate ages indicated that the accretion of the chondrite parent bodies was extremely early and fast, in contradiction to all the other evidence suggesting that accretion started relatively late and continued for several million years. The study by Fujiya et al. (2012) shows that extremely old 53Mn–53Cr ages are an artefact of inadequate standard-to-sample matching in the SIMS analyses in the earlier studies. New SIMS measurements with a matrix-matched standard for accurate Mn/Cr determination yield an age of 4563.4 +0.4/–0.5 Ma, much younger than the earlier estimated apparent ages between 4565 and 4569 Ma. The new result is consistent with late accretion of the chondrite parent bodies and suggests an onset of aqueous activity in the Solar System contemporaneous with early thermal metamorphism. CAIs: How Old Is Old? The progress in U–Pb dating of CAIs, recognized as the oldest macroscopic objects in the Solar System, provides an excellent illustration of the growth of scientific knowledge (Fig. 5). As analytical techniques progressed, the precision and consistency of CAI ages improved to less than 1 million years. Then the discovery of large 238U/235U variations in CAIs (Brennecka et al. 2010) added a previously unrecognized uncertainty to the age. 
An attempt to remedy the situation by applying an age correction based on an empirical 238U/235U versus Th/U correlation for other CAIs (Bouvier and Wadhwa 2010) made the CAI age data set discrepant. However, four 238U/235U-corrected CAI dates reported recently (Amelin et al. 2010; Connelly et al. 2012) show excellent agreement, with a total range for the ages of only 0.2 million years – from 4567.18 ± 0.50 Ma to 4567.38 ± 0.31 Ma. This short age interval is also consistent with uniform 26Al/27Al values close to 5*10−5 in CAIs. Such rapid turnover of new ideas and interpretations in the wake of analytical innovation suggests we are on the way to a new paradigm for condensation in the protoplanetary disk. CONCLUSION AND OUTLOOK The road towards a unified timescale of Solar System formation is not straight. We know more about the behaviour of radionuclide chronometers in meteorites, possess better tools for isotope analyses and have accumulated much high-quality data. Some of these data are inconsistent with previous views on the formation of the Solar System and demand the development of new models. Recent findings remind us that the foundations of cosmochronology, and geochronology in general, require regular inspection, reinforcement and, if necessary, rebuilding, to make sure they are strong enough to sustain the growing body of knowledge. We thank reviewers Noriko Kita and Thorsten Kleine for constructive and extensive critiquing, and the editors Mark Schmitz, Dan Condon and John Valley for the opportunity to submit a manuscript to the Geochronology issue of Elements and for their valuable comments.
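To illustrate the arithmetic behind the extinct-radionuclide ages discussed above, here is a minimal sketch in Python of how a 26Al–26Mg relative age is anchored to the Pb–Pb age of CAIs. The half-life, the canonical CAI ratio of about 5 × 10−5 and the anchor age of about 4567.3 Ma are round-number assumptions for illustration, not values from any particular measurement:

import math

AL26_HALF_LIFE_MYR = 0.72        # assumed approximate half-life of 26Al, in Myr
DECAY_CONST = math.log(2) / AL26_HALF_LIFE_MYR

CAI_RATIO = 5.0e-5               # assumed canonical initial 26Al/27Al in CAIs
CAI_PB_PB_AGE_MA = 4567.3        # assumed Pb-Pb anchor age of CAIs, in Ma

def time_after_cais_myr(sample_ratio):
    # time needed for the CAI ratio to decay down to the sample's initial ratio
    return math.log(CAI_RATIO / sample_ratio) / DECAY_CONST

chondrule_ratio = 1.0e-5         # hypothetical inferred initial ratio for a chondrule
dt = time_after_cais_myr(chondrule_ratio)
print(f"formed {dt:.2f} Myr after CAIs")                        # about 1.7 Myr
print(f"anchored absolute age: {CAI_PB_PB_AGE_MA - dt:.2f} Ma")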
November is a busy month! November 8th is National STEM Day – let's make it a blast! Celebrate National STEM Day with a sequence of activities you can implement in your afterschool program today. Say goodbye to googling or searching on Pinterest: use the following activities, in sequence, to support youth in building an engineering mindset, straight from NASA's Engineering Playlist.
Lesson 1: Build an Airplane (Engineering Design Process Activity) – Build an airplane that has ailerons, elevators and a rudder.
Lesson 2: Build a Satellite (Engineering Design Process Activity) – This lesson provides students with an understanding of satellites, their use and structure, and power systems. In this potentially multi-day activity, students will use the engineering design process to design, build, test and improve a model satellite.
Lesson 3: Women@NASA: Role Models (Role Models and Mentors Connection) – Watch a video of one of the women engineers who work at NASA's Jet Propulsion Laboratory. After watching the video, have the youth write a paragraph about the engineer, one thing they wonder about her work, and one question they would ask her.
Lesson 4: Make a Soda Can Engine (Engineering Design Process) – Students will investigate the action-reaction principle (Newton's third law of motion) by creating a water-propelled engine. By observing the device in action and changing certain variables, students will explore the properties of engines and the dynamics behind directionality and thrust.
Lesson 5: Women@NASA: Careers (Role Models and Mentors Connection) – Many people dream of careers in science, technology, engineering, and math. Hear about the fascinating careers at NASA. After watching the short video, youth write a paragraph about the career they might want in STEM.
Looking for even more? Engineering is Elementary's NASA Partnership offers free units – a suite of free NASA-funded STEM resources for students in grades 3-8. All resources are research-based and classroom-tested. They are designed to support students' understanding of space, while helping them see themselves as capable problem solvers.
We have seen throughout this Basic Electronics Tutorials website that there are two types of elements within an electrical or electronics circuit: passive elements and active elements. An active element is one that is capable of continuously supplying energy to a circuit, such as a battery, a generator, an operational amplifier, etc. Passive elements, on the other hand, are physical elements such as resistors, capacitors and inductors, which cannot generate electrical energy by themselves but only consume it.

The types of active circuit elements that are most important to us are those that supply electrical energy to the circuits or networks connected to them. These are called "electrical sources", with the two types of electrical sources being the voltage source and the current source. The current source is usually less common in circuits than the voltage source, but both are used and can be regarded as complements of each other. An electrical supply, or simply "a source", is a device that supplies electrical power to a circuit in the form of a voltage source or a current source.

Both types of electrical sources can be classed as a direct (DC) or alternating (AC) source, in which a constant voltage is called a DC voltage and one that varies sinusoidally with time is called an AC voltage. So, for example, batteries are DC sources and the 230V wall socket or mains outlet in your home is an AC source.

We said earlier that electrical sources supply energy, but one of the interesting characteristics of an electrical source is that it is also capable of converting non-electrical energy into electrical energy and vice versa. For example, a battery converts chemical energy into electrical energy, while an electrical machine such as a DC generator or an AC alternator converts mechanical energy into electrical energy. Renewable technologies can convert energy from the sun, the wind, and waves into electrical or thermal energy. As well as converting energy from one form to another, electrical sources can both deliver and absorb energy, allowing it to flow in both directions.

Another important characteristic of an electrical source, and one which defines its operation, is its I-V characteristic. The I-V characteristic of an electrical source can give us a very nice pictorial description of the source, either as a voltage source or a current source, as shown.

Electrical sources, whether voltage sources or current sources, can be classed as being either independent (ideal) or dependent (controlled), that is, sources whose value depends upon a voltage or current elsewhere within the circuit, which itself can be either constant or time-varying. When dealing with circuit laws and analysis, electrical sources are often viewed as being "ideal", that is, the source could theoretically deliver an infinite amount of energy without loss, giving characteristics represented by a straight line. However, in real or practical sources there is always a resistance associated with the source (connected in parallel for a current source, or in series for a voltage source) which affects its output.

A voltage source, such as a battery or generator, provides a potential difference (voltage) between two points within an electrical circuit, allowing current to flow around it. Remember that voltage can exist without current. A battery is the most common voltage source for a circuit, with the voltage that appears across the positive and negative terminals of the source being called the terminal voltage.
An ideal voltage source is defined as a two-terminal active element that is capable of supplying and maintaining the same voltage (v) across its terminals regardless of the current (i) flowing through it. In other words, an ideal voltage source will supply a constant voltage at all times regardless of the value of the current being supplied, producing an I-V characteristic represented by a straight line.

Then an ideal voltage source is known as an Independent Voltage Source, as its voltage does not depend on either the value of the current flowing through the source or its direction, but is determined solely by the value of the source alone. So, for example, an automobile battery has a 12V terminal voltage that remains constant as long as the current through it does not become too high, delivering power to the car in one direction and absorbing power in the other direction as it charges.

On the other hand, a Dependent Voltage Source, or controlled voltage source, provides a voltage supply whose magnitude depends on either the voltage across or the current flowing through some other circuit element. A dependent voltage source is indicated with a diamond shape and is used as an equivalent electrical source for many electronic devices, such as transistors and operational amplifiers.

Ideal voltage sources can be connected together in either parallel or series, the same as for any circuit element. Series voltages add together while parallel voltages have the same value. Note that unequal ideal voltage sources cannot be connected directly together in parallel.

While not best practice for circuit analysis, ideal voltage sources can be connected in parallel provided they are of the same voltage value. Here in this example, two 10 volt voltage sources are combined to produce 10 volts between terminals A and B. Ideally, there would be just one single voltage source of 10 volts given between terminals A and B.

What is not allowed, or is not best practice, is connecting together ideal voltage sources that have different voltage values, as shown, or that are short-circuited by an external closed loop or branch. However, when dealing with circuit analysis, voltage sources of different values can be used provided there are other circuit elements in between them to comply with Kirchhoff's Voltage Law, KVL.

Unlike parallel connected voltage sources, ideal voltage sources of different values can be connected together in series to form a single voltage source whose output will be the algebraic addition or subtraction of the voltages used. Their connection can be as series-aiding or series-opposing voltages, as shown.

Series-aiding voltage sources are series-connected sources with their polarities connected so that the plus terminal of one is connected to the negative terminal of the next, allowing current to flow in the same direction. In the example above, the two voltages of 10V and 5V of the first circuit can be added, for a VS of 10 + 5 = 15V. So the voltage across terminals A and B is 15 volts.

Series-opposing voltage sources are series-connected sources which have their polarities connected so that the plus terminals or the negative terminals are connected together, as shown in the second circuit above. The net result is that the voltages are subtracted from each other. Then the two voltages of 10V and 5V of the second circuit are subtracted, with the smaller voltage subtracted from the larger voltage, resulting in a VS of 10 - 5 = 5V.
The polarity across terminals A and B is determined by the polarity of the larger voltage source; in this example terminal A is positive and terminal B is negative, resulting in +5 volts. If the series-opposing voltages are equal, the net voltage across A and B will be zero as one voltage balances out the other. Any current (I) will also be zero, as without a voltage source current cannot flow.

Two series-aiding ideal voltage sources of 6 volts and 9 volts respectively are connected together to supply a load resistance of 100 Ohms. Calculate the source voltage, VS, the load current through the resistor, IR, and the total power, P, dissipated by the resistor. Draw the circuit.

Thus, VS = 15V, IR = 150mA or 0.15A, and PR = 2.25W.

We have seen that an ideal voltage source can provide a voltage supply that is independent of the current flowing through it, that is, it maintains the same voltage value at all times. This idea may work well for circuit analysis techniques, but in the real world voltage sources behave a little differently, because for a practical voltage source the terminal voltage actually decreases with an increase in load current.

As the terminal voltage of an ideal voltage source does not vary with increases in the load current, this implies that an ideal voltage source has zero internal resistance, RS = 0. In other words, it is a resistorless voltage source. In reality all voltage sources have a very small internal resistance which reduces their terminal voltage as they supply higher load currents.

For non-ideal or practical voltage sources such as batteries, the internal resistance (RS) produces the same effect as a resistance connected in series with an ideal voltage source, as these two series-connected elements carry the same current, as shown.

You may have noticed that a practical voltage source closely resembles a Thevenin's equivalent circuit, as Thevenin's theorem states that "any linear network containing resistances and sources of emf and current may be replaced by a single voltage source, VS, in series with a single resistance, RS". Note that if the series source resistance is low, the voltage source approaches the ideal; when the source resistance is infinite, the voltage source is open-circuited.

In the case of all real or practical voltage sources, this internal resistance, RS, no matter how small, has an effect on the I-V characteristic of the source, as the terminal voltage falls off with an increase in load current. This is because the same load current flows through RS.

Ohm's law tells us that when a current (i) flows through a resistance, a voltage drop is produced across that resistance. The value of this voltage drop is given as iRS. Then VOUT will equal the ideal source voltage, VS, minus the iRS voltage drop across the resistor. Remember that in the case of an ideal voltage source, RS is equal to zero as there is no internal resistance, therefore the terminal voltage is the same as VS.

Then the voltage sum around the loop given by Kirchhoff's voltage law, KVL, is: VOUT = VS – iRS. This equation can be plotted to give the I-V characteristic of the actual output voltage. It will give a straight line with a slope –RS which intersects the vertical voltage axis at the same point as VS when the current i = 0, as shown.
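To make the VOUT = VS – iRS relationship concrete, here is a minimal sketch in Python; it reuses the 15 V source and 100 Ohm load from the example above, and the 0.5 Ohm internal resistance is an assumed value added purely for illustration:

def terminal_voltage(vs, rs, i):
    # practical source: terminal voltage droops by i*Rs as load current rises
    return vs - i * rs

VS = 15.0      # series-aiding 6 V + 9 V sources from the example above
RL = 100.0     # load resistance in ohms
RS = 0.5       # assumed internal resistance for illustration

# load current with the internal resistance included: i = VS / (RS + RL)
i = VS / (RS + RL)
vout = terminal_voltage(VS, RS, i)
print(f"load current   = {i*1000:.1f} mA")     # ~149.3 mA (150 mA if RS = 0)
print(f"terminal volts = {vout:.2f} V")        # slightly below 15 V
print(f"load power     = {i**2 * RL:.2f} W")   # ~2.23 W (2.25 W if RS = 0)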
Therefore, all ideal voltage sources will have a straight-line I-V characteristic, but non-ideal or practical voltage sources will instead have an I-V characteristic that is slightly angled down by an amount equal to iRS, where RS is the internal source resistance (or impedance). The I-V characteristic of a real battery provides a very close approximation of an ideal voltage source, since the source resistance RS is usually quite small.

The decrease in the slope of the I-V characteristic as the current increases is known as regulation. Voltage regulation is an important measure of the quality of a practical voltage source as it measures the variation in terminal voltage between no load, that is when IL = 0 (an open circuit), and full load, that is when IL is at maximum (a short circuit).

A battery supply consists of an ideal voltage source in series with an internal resistor. The voltage and current measured at the terminals of the battery were found to be VOUT1 = 130V at 10A, and VOUT2 = 100V at 25A. Calculate the voltage rating of the ideal voltage source and the value of its internal resistance. Draw the I-V characteristics.

Firstly, let's define, in simple simultaneous-equation form, the two voltage and current outputs of the battery supply, given as VOUT1 and VOUT2: VOUT1 = VS – 10RS = 130V and VOUT2 = VS – 25RS = 100V.

As we have the voltages and currents in simultaneous-equation form, to find VS we first multiply the VOUT1 equation by five (5) and the VOUT2 equation by two (2), as shown, to make the coefficient of the current term (i) the same for both equations. Having made the coefficients of RS the same by multiplying through with these constants, we now multiply the second equation, VOUT2, by minus one (-1) to allow the two equations to be subtracted so that we can solve for VS: 5VS – 50RS = 650 and 2VS – 50RS = 200, which subtract to give 3VS = 450, so VS = 150 volts.

Knowing that the ideal voltage source, VS, is equal to 150 volts, we can use this value in the equation for VOUT1 (or VOUT2 if so wished) and solve to find the series resistance, RS = (150 – 130)/10 = 2Ω. Then for our simple example, the battery's internal voltage source is calculated as VS = 150 volts, and its internal resistance as RS = 2Ω. The I-V characteristics of the battery are given as shown.

Unlike an ideal voltage source, which produces a constant voltage across its terminals regardless of what is connected to it, a controlled or dependent voltage source changes its terminal voltage depending upon the voltage across, or the current through, some other element connected to the circuit. As such, it is sometimes difficult to specify the value of a dependent voltage source unless you know the actual value of the voltage or current on which it depends.

Dependent voltage sources behave similarly to the electrical sources we have looked at so far, both practical and ideal (independent); the difference this time is that a dependent voltage source can be controlled by an input current or voltage. A voltage source that depends on a voltage input is generally referred to as a Voltage Controlled Voltage Source or VCVS. A voltage source that depends on a current input is referred to as a Current Controlled Voltage Source or CCVS.

Ideal dependent sources are commonly used in analysing the input/output characteristics or the gain of circuit elements such as operational amplifiers, transistors and integrated circuits. Generally, an ideal dependent voltage source, either voltage or current controlled, is designated by a diamond-shaped symbol, as shown.
An ideal dependent voltage-controlled voltage source, VCVS, maintains an output voltage equal to some multiplying constant (basically an amplification factor) times the controlling voltage present elsewhere in the circuit. As the multiplying constant is, well, a constant, the controlling voltage, VIN, will determine the magnitude of the output voltage, VOUT. In other words, the output voltage "depends" on the value of the input voltage, making it a dependent voltage source, and in many ways an ideal transformer can be thought of as a VCVS device with the amplification factor being its turns ratio. Then the VCVS output voltage is determined by the following equation: VOUT = μVIN. Note that the multiplying constant μ is dimensionless, as it is purely a scaling factor, because μ = VOUT/VIN, so its units will be volts/volts.

An ideal dependent current-controlled voltage source, CCVS, maintains an output voltage equal to some multiplying constant ρ (rho) times a controlling current input generated elsewhere within the connected circuit. Then the output voltage "depends" on the value of the input current, again making it a dependent voltage source. As the controlling current, IIN, determines the magnitude of the output voltage, VOUT, through the multiplying constant ρ (rho), we can model a current-controlled voltage source as a trans-resistance amplifier, with the multiplying constant ρ giving us the following equation: VOUT = ρIIN. This multiplying constant ρ (rho) has units of ohms because ρ = VOUT/IIN, so its units will be volts/amperes.

We have seen here that a Voltage Source can be either an ideal independent voltage source or a controlled dependent voltage source. Independent voltage sources supply a constant voltage that does not depend on any other quantity within the circuit. Ideal independent sources can be batteries, DC generators or time-varying AC voltage supplies from alternators.

Independent voltage sources can be modelled as either an ideal voltage source (RS = 0), where the output is constant for all load currents, or a non-ideal or practical source, such as a battery, with a resistance connected in series with the circuit to represent the internal resistance of the source. Ideal voltage sources can be connected together in parallel only if they are of the same voltage value. Series-aiding or series-opposing connections will affect the output value. Also, when solving circuit analysis problems and network theorems, voltage sources are treated as short-circuited sources, making their voltage equal to zero, to help solve the network. Note also that voltage sources are capable of both delivering and absorbing power.

Ideal dependent voltage sources, represented by a diamond-shaped symbol, are dependent on, and proportional to, an external controlling voltage or current. The multiplying constant μ for a VCVS has no units, while the multiplying constant ρ for a CCVS has units of ohms. A dependent voltage source is of great interest for modelling electronic or active devices, such as operational amplifiers and transistors, that have gain.

In the next tutorial about electrical sources, we will look at the complement of the voltage source, that is, the current source, and see that current sources can also be classed as dependent or independent electrical sources.
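As a closing illustration of the two dependent-source relationships, VOUT = μVIN and VOUT = ρIIN, here is a minimal sketch in Python (the function names and the numeric gains are ours, chosen for illustration only):

def vcvs(v_in, mu):
    # voltage-controlled voltage source: VOUT = mu * VIN (mu is dimensionless)
    return mu * v_in

def ccvs(i_in, rho):
    # current-controlled voltage source: VOUT = rho * IIN (rho in ohms)
    return rho * i_in

print(vcvs(0.5, 20.0))    # 0.5 V controlling voltage, gain 20   -> 10 V output
print(ccvs(0.002, 5000))  # 2 mA controlling current, 5 kilohm   -> 10 V output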
We have worked with iteration in previous lectures, related to loops, while recursion is a new topic for us. Let's start with the definition: "Recursion occurs when a function calls itself." Both recursion and iteration are used to repeat a piece of work, but they are very different from each other. In both recursion and iteration, the goal is to execute a statement again and again until a specific condition is fulfilled. An iterative loop ends when it has reached the end of its sequence; for example, if we are moving through a list, then the loop will stop executing when it reaches the end of the list. In the case of recursion, the function stops calling itself when a base condition is satisfied. Let us understand both of them in detail.

There are two essential and significant parts of a recursive function. The first one is the base case, and the second one is the recursive case. In the base case, a conditional statement is written, which the program executes at the end, just before returning values to the user. In the recursive case, the formula or logic the function is based upon is written. Each recursive call should move closer to the base case or base condition. Just as a loop can run endlessly if its condition is never satisfied, in recursion, if the base case is never reached, the function will keep calling itself, eventually causing the program to crash.

In recursion, each recursive call is stored on a stack until the base condition is reached; then the calls return one by one, for example printing a series or sequence of numbers onto the screen. It is worth noting that a stack is a LIFO data structure, i.e., last in, first out. This means that the call pushed onto the stack last will be completed first, and the call pushed onto the stack first will be completed last.

We have a basic idea about iteration, as we have already discussed it in tutorials #16 and #17 on loops. Iteration runs a block of code again and again, depending on a user-defined condition. Many of the tasks that recursion performs can also be achieved by using iteration, but not all, and vice versa. In my opinion, for smaller programs with fewer lines of code we can use a recursive approach, and in complex programs we should go with iteration to reduce the risk of bugs.

# n! = n * (n-1) * (n-2) * (n-3) ... 1
# n! = n * (n-1)!

def factorial_iterative(n):
    """
    :param n: Integer
    :return: n * (n-1) * (n-2) ... 1
    """
    fac = 1
    for i in range(n):
        fac = fac * (i + 1)
    return fac


def factorial_recursive(n):
    """
    :param n: Integer
    :return: n * (n-1) * (n-2) ... 1
    """
    if n <= 1:   # base case (also guards against n = 0)
        return 1
    else:
        return n * factorial_recursive(n - 1)

# factorial_recursive(5)
# = 5 * factorial_recursive(4)
# = 5 * 4 * factorial_recursive(3)
# = 5 * 4 * 3 * factorial_recursive(2)
# = 5 * 4 * 3 * 2 * factorial_recursive(1)
# = 5 * 4 * 3 * 2 * 1 = 120


# Fibonacci sequence: 0 1 1 2 3 5 8 13
def fibonacci(n):
    # returns the n-th Fibonacci number; assumes n >= 1
    if n == 1:
        return 0
    elif n == 2:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)


number = int(input("Enter the number: "))
# print("Factorial Using Iterative Method", factorial_iterative(number))
# print("Factorial Using Recursive Method", factorial_recursive(number))
print(fibonacci(number))
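The recursive fibonacci above returns only a single term and recomputes smaller terms many times over. If the goal is instead to print the whole list of the first n Fibonacci numbers, an iterative version is simpler and faster. Here is a minimal sketch (the helper name fibonacci_list is ours, added for illustration):

def fibonacci_list(n):
    # iteratively build the first n Fibonacci numbers: 0 1 1 2 3 5 8 13 ...
    terms = []
    a, b = 0, 1
    for _ in range(n):
        terms.append(a)
        a, b = b, a + b
    return terms

print(fibonacci_list(8))   # [0, 1, 1, 2, 3, 5, 8, 13]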
A monopoly (from Greek μόνος mónos ("alone" or "single") and πωλεῖν pōleîn ("to sell")) exists when a specific person or enterprise is the only supplier of a particular commodity (this contrasts with a monopsony, which relates to a single entity's control of a market to purchase a good or service, and with an oligopoly, which consists of a few entities dominating an industry). Monopolies are thus characterized by a lack of economic competition to produce the good or service, a lack of viable substitute goods, and the possibility of a high monopoly price well above the firm's marginal cost that leads to a high monopoly profit. The verb monopolise or monopolize refers to the process by which a company gains the ability to raise prices or exclude competitors.

In economics, a monopoly is a single seller. In law, a monopoly is a business entity that has significant market power, that is, the power to charge overly high prices. Although monopolies may be big businesses, size is not a characteristic of a monopoly. A small business may still have the power to raise prices in a small industry (or market).

A monopoly is distinguished from a monopsony, in which there is only one buyer of a product or service; a monopoly may also have monopsony control of a sector of a market. Likewise, a monopoly should be distinguished from a cartel (a form of oligopoly), in which several providers act together to coordinate services, prices or the sale of goods. Monopolies, monopsonies and oligopolies are all situations in which one or a few entities have market power and therefore interact with their customers (monopoly), suppliers (monopsony) and the other companies (oligopoly) in ways that leave market interactions distorted.

Monopolies can be established by a government, form naturally, or form by integration. In many jurisdictions, competition laws restrict monopolies. Holding a dominant position or a monopoly of a market is often not illegal in itself; however, certain categories of behavior can be considered abusive and therefore incur legal sanctions when the business is dominant. A government-granted monopoly or legal monopoly, by contrast, is sanctioned by the state, often to provide an incentive to invest in a risky venture or enrich a domestic interest group. Patents, copyrights, and trademarks are sometimes used as examples of government-granted monopolies. The government may also reserve the venture for itself, thus forming a government monopoly.

- 1 Market structures
- 2 Characteristics
- 3 Sources of monopoly power
- 4 Monopoly versus competitive markets
- 5 The inverse elasticity rule
- 6 Price discrimination
- 7 Monopoly and efficiency
- 8 Monopolist shutdown rule
- 9 Breaking up monopolies
- 10 Law
- 11 Historical monopolies
- 12 Countering monopolies
- 13 See also
- 14 Notes and references
- 15 Further reading
- 16 External links

In economics, the idea of monopoly is important for the study of market structures, which directly concerns normative aspects of economic competition, and provides the basis for topics such as industrial organization and the economics of regulation. There are four basic types of market structures in traditional economic analysis: perfect competition, monopolistic competition, oligopoly and monopoly. A monopoly is a structure in which a single supplier produces and sells a given product.
If there is a single seller in a certain industry and there are not any close substitutes for the product, then the market structure is that of a "pure monopoly". Sometimes, there are many sellers in an industry and/or there exist many close substitutes for the goods being produced, but nevertheless companies retain some market power. This is termed monopolistic competition, whereas in oligopoly the companies interact strategically. In general, the main results from this theory compare price-fixing methods across market structures, analyze the effect of a certain structure on welfare, and vary technological/demand assumptions in order to assess the consequences for an abstract model of society. Most economic textbooks follow the practice of carefully explaining the perfect competition model, mainly because of its usefulness to understand "departures" from it (the so-called imperfect competition models). The boundaries of what constitutes a market and what doesn't are relevant distinctions to make in economic analysis. In a general equilibrium context, a good is a specific concept entangling geographical and time-related characteristics (grapes sold during October 2009 in Moscow is a different good from grapes sold during October 2009 in New York). Most studies of market structure relax a little their definition of a good, allowing for more flexibility at the identification of substitute-goods. Therefore, one can find an economic analysis of the market of grapes in Russia, for example, which is not a market in the strict sense of general equilibrium theory monopoly. - Profit Maximizer: Maximizes profits. - Price Maker: Decides the price of the good or product to be sold, but does so by determining the quantity in order to demand the price desired by the firm. - High Barriers: Other sellers are unable to enter the market of the monopoly. - Single seller: In a monopoly, there is one seller of the good that produces all the output. Therefore, the whole market is being served by a single company, and for practical purposes, the company is the same as the industry. - Price Discrimination: A monopolist can change the price and quality of the product. He or she sells higher quantities, charging a lower price for the product, in a very elastic market and sells lower quantities, charging a higher price, in a less elastic market. Sources of monopoly power Monopolies derive their market power from barriers to entry – circumstances that prevent or greatly impede a potential competitor's ability to compete in a market. There are three major types of barriers to entry: economic, legal and deliberate. - Economic barriers: Economic barriers include economies of scale, capital requirements, cost advantages and technological superiority. - Economies of scale: Monopolies are characterised by decreasing costs for a relatively large range of production. Decreasing costs coupled with large initial costs give monopolies an advantage over would-be competitors. Monopolies are often in a position to reduce prices below a new entrant's operating costs and thereby prevent them from continuing to compete. Furthermore, the size of the industry relative to the minimum efficient scale may limit the number of companies that can effectively compete within the industry. 
If for example the industry is large enough to support one company of minimum efficient scale then other companies entering the industry will operate at a size that is less than MES, meaning that these companies cannot produce at an average cost that is competitive with the dominant company. Finally, if long-term average cost is constantly decreasing, the least cost method to provide a good or service is by a single company. - Capital requirements: Production processes that require large investments of capital, or large research and development costs or substantial sunk costs limit the number of companies in an industry. Large fixed costs also make it difficult for a small company to enter an industry and expand. - Technological superiority: A monopoly may be better able to acquire, integrate and use the best possible technology in producing its goods while entrants do not have the size or finances to use the best available technology. One large company can sometimes produce goods cheaper than several small companies. - No substitute goods: A monopoly sells a good for which there is no close substitute. The absence of substitutes makes the demand for the good relatively inelastic enabling monopolies to extract positive profits. - Control of natural resources: A prime source of monopoly power is the control of resources that are critical to the production of a final good. - Network externalities: The use of a product by a person can affect the value of that product to other people. This is the network effect. There is a direct relationship between the proportion of people using a product and the demand for that product. In other words, the more people who are using a product the greater the probability of any individual starting to use the product. This effect accounts for fads, fashion trends, social networks etc. It also can play a crucial role in the development or acquisition of market power. The most famous current example is the market dominance of the Microsoft office suite and operating system in personal computers. - Legal barriers: Legal rights can provide opportunity to monopolise the market of a good. Intellectual property rights, including patents and copyrights, give a monopolist exclusive control of the production and selling of certain goods. Property rights may give a company exclusive control of the materials necessary to produce a good. - Deliberate actions: A company wanting to monopolise a market may engage in various types of deliberate action to exclude competitors or eliminate competition. Such actions include collusion, lobbying governmental authorities, and force (see anti-competitive practices). In addition to barriers to entry and competition, barriers to exit may be a source of market power. Barriers to exit are market conditions that make it difficult or expensive for a company to end its involvement with a market. Great liquidation costs are a primary barrier for exiting. Market exit and shutdown are separate events. The decision whether to shut down or operate is not affected by exit barriers. A company will shut down if price falls below minimum average variable costs. Monopoly versus competitive markets While monopoly and perfect competition mark the extremes of market structures there is some similarity. The cost functions are the same. Both monopolies and perfectly competitive (PC) companies minimize cost and maximize profit. The shutdown decisions are the same. Both are assumed to have perfectly competitive factors markets. 
There are distinctions, some of the more important of which are as follows: - Marginal revenue and price: In a perfectly competitive market, price equals marginal cost. In a monopolistic market, however, price is set above marginal cost. - Product differentiation: There is zero product differentiation in a perfectly competitive market. Every product is perfectly homogeneous and a perfect substitute for any other. With a monopoly, there is great to absolute product differentiation in the sense that there is no available substitute for a monopolized good. The monopolist is the sole supplier of the good in question. A customer either buys from the monopolizing entity on its terms or does without. - Number of competitors: PC markets are populated by an infinite number of buyers and sellers. Monopoly involves a single seller. - Barriers to Entry: Barriers to entry are factors and circumstances that prevent entry into market by would-be competitors and limit new companies from operating and expanding within the market. PC markets have free entry and exit. There are no barriers to entry, or exit competition. Monopolies have relatively high barriers to entry. The barriers must be strong enough to prevent or discourage any potential competitor from entering the market. - Elasticity of Demand: The price elasticity of demand is the percentage change of demand caused by a one percent change of relative price. A successful monopoly would have a relatively inelastic demand curve. A low coefficient of elasticity is indicative of effective barriers to entry. A PC company has a perfectly elastic demand curve. The coefficient of elasticity for a perfectly competitive demand curve is infinite. - Excess Profits: Excess or positive profits are profit more than the normal expected return on investment. A PC company can make excess profits in the short term but excess profits attract competitors, which can enter the market freely and decrease prices, eventually reducing excess profits to zero. A monopoly can preserve excess profits because barriers to entry prevent competitors from entering the market. - Profit Maximization: A PC company maximizes profits by producing such that price equals marginal costs. A monopoly maximises profits by producing where marginal revenue equals marginal costs. The rules are not equivalent. The demand curve for a PC company is perfectly elastic – flat. The demand curve is identical to the average revenue curve and the price line. Since the average revenue curve is constant the marginal revenue curve is also constant and equals the demand curve, Average revenue is the same as price (AR = TR/Q = P x Q/Q = P). Thus the price line is also identical to the demand curve. In sum, D = AR = MR = P. - P-Max quantity, price and profit: If a monopolist obtains control of a formerly perfectly competitive industry, the monopolist would increase prices, reduce production, and realise positive economic profits. - Supply Curve: in a perfectly competitive market there is a well defined supply function with a one to one relationship between price and quantity supplied. In a monopolistic market no such supply relationship exists. A monopolist cannot trace a short term supply curve because for a given price there is not a unique quantity supplied. As Pindyck and Rubenfeld note, a change in demand "can lead to changes in prices with no change in output, changes in output with no change in price or both". Monopolies produce where marginal revenue equals marginal costs. 
For a specific demand curve the supply "curve" would be the price/quantity combination at the point where marginal revenue equals marginal cost. If the demand curve shifted, the marginal revenue curve would shift as well and a new equilibrium and supply "point" would be established. The locus of these points would not be a supply curve in any conventional sense.

The most significant distinction between a PC company and a monopoly is that the monopoly has a downward-sloping demand curve rather than the "perceived" perfectly elastic curve of the PC company. Practically all the variations mentioned above relate to this fact. If there is a downward-sloping demand curve then by necessity there is a distinct marginal revenue curve. The implications of this fact are best made manifest with a linear demand curve.

Assume that the inverse demand curve is of the form x = a − by, where x is price and y is quantity. Then the total revenue curve is TR = xy = ay − by² and the marginal revenue curve is thus MR = a − 2by. From this several things are evident. First, the marginal revenue curve has the same vertical (price-axis) intercept as the inverse demand curve. Second, the slope of the marginal revenue curve is twice that of the inverse demand curve. Third, the horizontal (quantity-axis) intercept of the marginal revenue curve is half that of the inverse demand curve. What is not quite so evident is that the marginal revenue curve is below the inverse demand curve at all points. Since all companies maximise profits by equating MR and MC it must be the case that at the profit-maximizing quantity MR and MC are less than price, which further implies that a monopoly produces a lower quantity at a higher price than if the market were perfectly competitive.

The fact that a monopoly has a downward-sloping demand curve means that the relationship between total revenue and output for a monopoly is much different from that of competitive companies. Total revenue equals price times quantity. A competitive company has a perfectly elastic demand curve, meaning that total revenue is proportional to output. Thus the total revenue curve for a competitive company is a ray with a slope equal to the market price. A competitive company can sell all the output it desires at the market price. For a monopoly to increase sales it must reduce price. Thus the total revenue curve for a monopoly is a parabola that begins at the origin, reaches a maximum value and then continuously decreases until total revenue is again zero. Total revenue has its maximum value when the slope of the total revenue function is zero. The slope of the total revenue function is marginal revenue. So the revenue maximizing quantity and price occur when MR = 0. For example, assume that the monopoly's demand function is P = 50 − 2Q. The total revenue function would be TR = 50Q − 2Q² and marginal revenue would be 50 − 4Q. Setting marginal revenue equal to zero, we have 50 − 4Q = 0, which gives Q = 12.5. So the revenue maximizing quantity for the monopoly is 12.5 units and the revenue maximizing price is P = 50 − 2(12.5) = 25.

A company with a monopoly does not experience price pressure from competitors, although it may experience pricing pressure from potential competition. If a company increases prices too much, then others may enter the market if they are able to provide the same good, or a substitute, at a lesser price. The idea that monopolies in markets with easy entry need not be regulated against is known as the "revolution in monopoly theory". A monopolist can extract only one premium, and getting into complementary markets does not pay.
That is, the total profits a monopolist could earn if it sought to leverage its monopoly in one market by monopolizing a complementary market are equal to the extra profits it could earn anyway by charging more for the monopoly product itself. However, the one monopoly profit theorem is not true if customers in the monopoly good are stranded or poorly informed, or if the tied good has high fixed costs. A pure monopoly has the same economic rationality of perfectly competitive companies, i.e. to optimise a profit function given some constraints. By the assumptions of increasing marginal costs, exogenous inputs' prices, and control concentrated on a single agent or entrepreneur, the optimal decision is to equate the marginal cost and marginal revenue of production. Nonetheless, a pure monopoly can – unlike a competitive company – alter the market price for its own convenience: a decrease of production results in a higher price. In the economics' jargon, it is said that pure monopolies have "a downward-sloping demand". An important consequence of such behaviour is worth noticing: typically a monopoly selects a higher price and lesser quantity of output than a price-taking company; again, less is available at a higher price. The inverse elasticity rule A monopoly chooses that price that maximizes the difference between total revenue and total cost. The basic markup rule can be expressed as (P − MC)/P = 1/PED. The markup rules indicate that the ratio between profit margin and the price is inversely proportional to the price elasticity of demand. The implication of the rule is that the more elastic the demand for the product the less pricing power the monopoly has. Market power is the ability to increase the product's price above marginal cost without losing all customers. Perfectly competitive (PC) companies have zero market power when it comes to setting prices. All companies of a PC market are price takers. The price is set by the interaction of demand and supply at the market or aggregate level. Individual companies simply take the price determined by the market and produce that quantity of output that maximizes the company's profits. If a PC company attempted to increase prices above the market level all its customers would abandon the company and purchase at the market price from other companies. A monopoly has considerable although not unlimited market power. A monopoly has the power to set prices or quantities although not both. A monopoly is a price maker. The monopoly is the market and prices are set by the monopolist based on his circumstances and not the interaction of demand and supply. The two primary factors determining monopoly market power are the company's demand curve and its cost structure. Market power is the ability to affect the terms and conditions of exchange so that the price of a product is set by a single company (price is not imposed by the market as in perfect competition). Although a monopoly's market power is great it is still limited by the demand side of the market. A monopoly has a negatively sloped demand curve, not a perfectly inelastic curve. Consequently, any price increase will result in the loss of some customers. Price discrimination allows a monopolist to increase its profit by charging higher prices for identical goods to those who are willing or able to pay more. For example, most economic textbooks cost more in the United States than in developing countries like Ethiopia. 
Market power is the ability to increase the product's price above marginal cost without losing all customers. Perfectly competitive (PC) companies have zero market power when it comes to setting prices. All companies of a PC market are price takers. The price is set by the interaction of demand and supply at the market or aggregate level. Individual companies simply take the price determined by the market and produce that quantity of output that maximizes the company's profits. If a PC company attempted to increase prices above the market level all its customers would abandon the company and purchase at the market price from other companies. A monopoly has considerable although not unlimited market power. A monopoly has the power to set prices or quantities although not both. A monopoly is a price maker. The monopoly is the market and prices are set by the monopolist based on his circumstances and not the interaction of demand and supply. The two primary factors determining monopoly market power are the company's demand curve and its cost structure. Market power is the ability to affect the terms and conditions of exchange so that the price of a product is set by a single company (price is not imposed by the market as in perfect competition). Although a monopoly's market power is great it is still limited by the demand side of the market. A monopoly has a negatively sloped demand curve, not a perfectly inelastic curve. Consequently, any price increase will result in the loss of some customers. Price discrimination allows a monopolist to increase its profit by charging higher prices for identical goods to those who are willing or able to pay more. For example, most economic textbooks cost more in the United States than in developing countries like Ethiopia. In this case, the publisher is using its government-granted copyright monopoly to price discriminate between the generally wealthier American economics students and the generally poorer Ethiopian economics students. Similarly, most patented medications cost more in the U.S. than in other countries with a (presumed) poorer customer base. Typically, a high general price is listed, and various market segments get varying discounts. This is an example of framing to make the process of charging some people higher prices more socially acceptable. Perfect price discrimination would allow the monopolist to charge each customer the exact maximum amount he would be willing to pay. This would allow the monopolist to extract all the consumer surplus of the market. While such perfect price discrimination is a theoretical construct, advances in information technology and micromarketing may bring it closer to the realm of possibility. Partial price discrimination can cause some customers who are inappropriately pooled with high price customers to be excluded from the market. For example, a poor student in the U.S. might be excluded from purchasing an economics textbook at the U.S. price, which the student may have been able to purchase at the Ethiopian price. Similarly, a wealthy student in Ethiopia may be able and willing to buy at the U.S. price, though he would naturally hide that fact from the monopolist so as to pay the reduced third world price. These are deadweight losses and decrease a monopolist's profits. As such, monopolists have a substantial economic interest in improving their market information and market segmenting. One important caveat should be kept in mind when considering the standard monopoly model and its associated conclusions. The result that monopoly prices are higher, and production output lower, than under a competitive company follows from a requirement that the monopoly not charge different prices for different customers. That is, the monopoly is restricted to uniform pricing, under which all customers are charged the same amount. If the monopoly were permitted to charge individualised prices (this is termed first degree, or perfect, price discrimination), the quantity produced, and the price charged to the marginal customer, would be identical to that of a competitive company, thus eliminating the deadweight loss; however, all gains from trade (social welfare) would accrue to the monopolist and none to the consumer. In essence, every consumer would be indifferent between (1) going completely without the product or service and (2) being able to purchase it from the monopolist. As long as the price elasticity of demand for most customers is less than one in absolute value, it is advantageous for a company to increase its prices: it receives more money for fewer goods. With a price increase, price elasticity tends to increase, and in the optimum case above it will be greater than one for most customers. A company maximizes profit by selling where marginal revenue equals marginal cost. A company that does not engage in price discrimination will charge the profit maximizing price, P*, to all its customers. In such circumstances there are customers who would be willing to pay a higher price than P* and those who will not pay P* but would buy at a lower price.
A price discrimination strategy is to charge less price sensitive buyers a higher price and the more price sensitive buyers a lower price. Thus additional revenue is generated from two sources. The basic problem is to identify customers by their willingness to pay. The purpose of price discrimination is to transfer consumer surplus to the producer. Consumer surplus is the difference between the value of a good to a consumer and the price the consumer must pay in the market to purchase it. Price discrimination is not limited to monopolies. Market power is a company’s ability to increase prices without losing all its customers. Any company that has market power can engage in price discrimination. Perfect competition is the only market form in which price discrimination would be impossible (a perfectly competitive company has a perfectly elastic demand curve and has zero market power). There are three forms of price discrimination. First degree price discrimination charges each consumer the maximum price the consumer is willing to pay. Second degree price discrimination involves quantity discounts. Third degree price discrimination involves grouping consumers according to willingness to pay as measured by their price elasticities of demand and charging each group a different price. Third degree price discrimination is the most prevalent type. There are three conditions that must be present for a company to engage in successful price discrimination. First, the company must have market power. Second, the company must be able to sort customers according to their willingness to pay for the good. Third, the firm must be able to prevent resell. A company must have some degree of market power to practice price discrimination. Without market power a company cannot charge more than the market price. Any market structure characterized by a downward sloping demand curve has market power – monopoly, monopolistic competition and oligopoly. The only market structure that has no market power is perfect competition. A company wishing to practice price discrimination must be able to prevent middlemen or brokers from acquiring the consumer surplus for themselves. The company accomplishes this by preventing or limiting resale. Many methods are used to prevent resale. For example, persons are required to show photographic identification and a boarding pass before boarding an airplane. Most travelers assume that this practice is strictly a matter of security. However, a primary purpose in requesting photographic identification is to confirm that the ticket purchaser is the person about to board the airplane and not someone who has repurchased the ticket from a discount buyer. The inability to prevent resale is the largest obstacle to successful price discrimination. Companies have however developed numerous methods to prevent resale. For example, universities require that students show identification before entering sporting events. Governments may make it illegal to resale tickets or products. In Boston, Red Sox baseball tickets can only be resold legally to the team. The three basic forms of price discrimination are first, second and third degree price discrimination. In first degree price discrimination the company charges the maximum price each customer is willing to pay. The maximum price a consumer is willing to pay for a unit of the good is the reservation price. Thus for each unit the seller tries to set the price equal to the consumer’s reservation price. 
Direct information about a consumer's willingness to pay is rarely available. Sellers tend to rely on secondary information such as where a person lives (postal codes); for example, catalog retailers can mail high-priced catalogs to high-income postal codes. First degree price discrimination most frequently occurs in regard to professional services or in transactions involving direct buyer/seller negotiations. For example, an accountant who has prepared a consumer's tax return has information that can be used to charge customers based on an estimate of their ability to pay. In second degree price discrimination or quantity discrimination customers are charged different prices based on how much they buy. There is a single price schedule for all consumers but the prices vary depending on the quantity of the good bought. The theory behind second degree price discrimination is that a consumer is willing to buy only a certain quantity of a good at a given price. Companies know that a consumer's willingness to buy decreases as more units are purchased. The task for the seller is to identify these price points and to reduce the price once one is reached, in the hope that a reduced price will trigger additional purchases from the consumer: for example, selling in unit blocks rather than individual units. In third degree price discrimination or multi-market price discrimination the seller divides the consumers into different groups according to their willingness to pay, as measured by their price elasticity of demand. Each group of consumers effectively becomes a separate market with its own demand curve and marginal revenue curve. The firm then attempts to maximize profits in each segment by equating MR and MC. Generally the company charges a higher price to the group with a more price inelastic demand and a relatively lower price to the group with a more elastic demand. Examples of third degree price discrimination abound. Airlines charge higher prices to business travelers than to vacation travelers. The reasoning is that the demand curve for a vacation traveler is relatively elastic while the demand curve for a business traveler is relatively inelastic. Any determinant of price elasticity of demand can be used to segment markets. For example, seniors have a more elastic demand for movies than do young adults because they generally have more free time. Thus theaters will offer discount tickets to seniors. Assume that under a uniform pricing system the monopolist would sell five units at a price of $10 per unit. Assume that his marginal cost is $5 per unit. Total revenue would be $50, total costs would be $25 and profits would be $25. If the monopolist practiced price discrimination he would sell the first unit for $50, the second unit for $40, and so on. Total revenue would be $150, his total cost would be $25 and his profit would be $125. Several things are worth noting. First, the monopolist acquires all the consumer surplus and eliminates practically all the deadweight loss because he is willing to sell to anyone who is willing to pay at least the marginal cost; thus the price discrimination promotes efficiency. Secondly, under this pricing scheme the price of each unit equals the marginal revenue from that unit, so the demand (average revenue) curve effectively becomes the firm's marginal revenue curve; that is, the monopolist behaves like a perfectly competitive company. Thirdly, the discriminating monopolist produces a larger quantity than the monopolist operating under a uniform pricing scheme.
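The arithmetic in this example can be checked directly. Below is a minimal sketch in Python, using the five hypothetical reservation prices implied by the example ($50, $40, $30, $20 and $10) and the stated marginal cost of $5 per unit.

```python
# Hypothetical reservation prices implied by the example above.
reservation_prices = [50, 40, 30, 20, 10]
mc = 5  # constant marginal cost per unit

# Uniform pricing: five units sold at $10 each.
uniform_revenue = 10 * 5
uniform_profit = uniform_revenue - mc * 5
print(uniform_revenue, uniform_profit)   # 50 25

# First degree (perfect) price discrimination: each unit is sold at the
# buyer's reservation price, as long as that price covers marginal cost.
units_sold = [p for p in reservation_prices if p >= mc]
pd_revenue = sum(units_sold)
pd_profit = pd_revenue - mc * len(units_sold)
print(pd_revenue, pd_profit)             # 150 125
```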
Successful price discrimination requires that companies separate consumers according to their willingness to buy. Determining a customer's willingness to buy a good is difficult. Asking consumers directly is fruitless: consumers don't know, and to the extent they do they are reluctant to share that information with marketers. The two main methods for determining willingness to buy are observation of personal characteristics and of consumer actions. As noted, information about where a person lives (postal codes), how the person dresses, what kind of car he or she drives, occupation, and income and spending patterns can be helpful in classifying customers.
Monopoly and efficiency
According to the standard model, in which a monopolist sets a single price for all consumers, the monopolist will sell a lesser quantity of goods at a higher price than would companies under perfect competition. Because the monopolist ultimately forgoes transactions with consumers who value the product or service more than its price, monopoly pricing creates a deadweight loss, referring to potential gains that went neither to the monopolist nor to consumers. Given the presence of this deadweight loss, the combined surplus (or wealth) for the monopolist and consumers is necessarily less than the total surplus obtained by consumers under perfect competition. Where efficiency is defined by the total gains from trade, the monopoly setting is less efficient than perfect competition. It is often argued that monopolies tend to become less efficient and less innovative over time, becoming "complacent", because they do not have to be efficient or innovative to compete in the marketplace. Sometimes this very loss of efficiency can increase a potential competitor's value enough to overcome market entry barriers, or provide incentive for research and investment into new alternatives. The theory of contestable markets argues that in some circumstances (private) monopolies are forced to behave as if there were competition because of the risk of losing their monopoly to new entrants. This is likely to happen when a market's barriers to entry are low. It might also be because of the availability in the longer term of substitutes in other markets. For example, a canal monopoly, while worth a great deal in the late 18th century United Kingdom, was worth much less during the late 19th century because of the introduction of railways as a substitute. A natural monopoly is an organization that experiences increasing returns to scale over the relevant range of output and relatively high fixed costs. A natural monopoly occurs where the average cost of production "declines throughout the relevant range of product demand". The relevant range of product demand is where the average cost curve is below the demand curve. When this situation occurs, it is always cheaper for one large company to supply the market than multiple smaller companies; in fact, absent government intervention, such markets will naturally evolve into a monopoly. An early market entrant that takes advantage of the cost structure and can expand rapidly can exclude smaller companies from entering and can drive or buy out other companies. A natural monopoly suffers from the same inefficiencies as any other monopoly. Left to its own devices, a profit-seeking natural monopoly will produce where marginal revenue equals marginal cost. Regulation of natural monopolies is problematic. Fragmenting such monopolies is by definition inefficient.
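The cost logic behind a natural monopoly can be illustrated with a declining average cost curve. Below is a minimal sketch in Python; the fixed cost, marginal cost and market quantity are hypothetical values chosen only to show why a single supplier is cheaper than several.

```python
# Hypothetical cost structure with a high fixed cost and a constant marginal cost:
# total cost TC(q) = F + c*q, so average cost AC(q) = F/q + c falls as output rises.
F, c = 1000.0, 2.0

def total_cost(q):
    return F + c * q

def average_cost(q):
    return total_cost(q) / q

market_quantity = 500.0

# One company serving the whole market versus two companies splitting it equally.
one_company = total_cost(market_quantity)
two_companies = 2 * total_cost(market_quantity / 2)

print(average_cost(market_quantity))   # 4.0 -- average cost keeps falling with scale
print(one_company, two_companies)      # 2000.0 3000.0 -- one supplier is cheaper
```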
The most frequently used methods of dealing with natural monopolies are government regulation and public ownership. Government regulation generally consists of regulatory commissions charged with the principal duty of setting prices. To reduce prices and increase output, regulators often use average cost pricing. Under average cost pricing, the price and quantity are determined by the intersection of the average cost curve and the demand curve. This pricing scheme eliminates any positive economic profits since price equals average cost. Average-cost pricing is not perfect: regulators must estimate average costs, and companies have a reduced incentive to lower costs. Regulation of this type has not been limited to natural monopolies. Average-cost pricing also has a further disadvantage: by setting price at the intersection of the demand curve and the average total cost curve, the firm's output is allocatively inefficient, as the price exceeds the marginal cost, whereas price equals marginal cost in a perfectly competitive and allocatively efficient market. A government-granted monopoly (also called a "de jure monopoly") is a form of coercive monopoly by which a government grants exclusive privilege to a private individual or company to be the sole provider of a commodity; potential competitors are excluded from the market by law, regulation, or other mechanisms of government enforcement.
Monopolist shutdown rule
A monopolist should shut down when price is less than average variable cost for every output level, in other words where the demand curve is entirely below the average variable cost curve. Under these circumstances, at the profit-maximizing level of output (MR = MC) average revenue would be less than average variable cost, and the monopolist would be better off shutting down in the short term.
Breaking up monopolies
In a free market, monopolies can be ended at any time by new competition, breakaway businesses, or consumers seeking alternatives. In a highly regulated market environment a government will often either regulate the monopoly, convert it into a publicly owned monopoly, or forcibly fragment it (see Antitrust law and trust busting). Public utilities, often being naturally efficient with only one operator and therefore less susceptible to efficient breakup, are often strongly regulated or publicly owned. American Telephone & Telegraph (AT&T) and Standard Oil are debatable examples of the breakup of a private monopoly by government: when AT&T, a monopoly previously protected by force of law, was broken up into various components in 1984, MCI, Sprint, and other companies were able to compete effectively in the long distance phone market. The existence of a very high market share does not always mean consumers are paying excessive prices, since the threat of new entrants to the market can restrain a high-market-share company's price increases. Competition law does not make merely having a monopoly illegal, but rather abusing the power a monopoly may confer, for instance through exclusionary practices (such as predatory pricing or exclusive dealing). It is also illegal to try to obtain a monopoly by buying out the competition or through similar practices. If a monopoly arises naturally, for example because a competitor goes out of business or because of a lack of competition, it is not illegal until such time as the monopoly holder abuses the power.
First it is necessary to determine whether a company is dominant, or whether it behaves "to an appreciable extent independently of its competitors, customers and ultimately of its consumer". As with collusive conduct, market shares are determined with reference to the particular market in which the company and product in question is sold. The Herfindahl-Hirschman Index (HHI) is sometimes used to assess how competitive an industry is. In the US, the merger guidelines state that a post-merger HHI below 1000 is viewed as unconcentrated while HHIs above that will provoke further review. By European Union law, very large market shares raise a presumption that a company is dominant, which may be rebuttable. If a company has a dominant position, then there is "a special responsibility not to allow its conduct to impair competition on the common market". The lowest yet market share of a company considered "dominant" in the EU was 39.7%. Certain categories of abusive conduct are usually prohibited by a country's legislation. The main recognised categories are: - Limiting supply - Predatory pricing - Price discrimination - Refusal to deal and exclusive dealing - Tying (commerce) and product bundling Despite wide agreement that the above constitute abusive practices, there is some debate about whether there needs to be a causal connection between the dominant position of a company and its actual abusive conduct. Furthermore, there has been some consideration of what happens when a company merely attempts to abuse its dominant position. The meaning and understanding of the English word 'monopoly' has changed over the years. Monopolies of resources Vending of common salt (sodium chloride) was historically a natural monopoly. Until recently, a combination of strong sunshine and low humidity or an extension of peat marshes was necessary for producing salt from the sea, the most plentiful source. Changing sea levels periodically caused salt "famines" and communities were forced to depend upon those who controlled the scarce inland mines and salt springs, which were often in hostile areas (e.g. the Sahara desert) requiring well-organised security for transport, storage, and distribution. The "Gabelle" was a notoriously high tax levied upon salt in the Kingdom of France. The much-hated levy had a role in the beginning of the French Revolution, when strict legal controls specified who was allowed to sell and distribute salt. First instituted in 1286, the Gabelle was not permanently abolished until 1945. Robin Gollan argues in The Coalminers of New South Wales that anti-competitive practices developed in the coal industry of Australia's Newcastle as a result of the business cycle. The monopoly was generated by formal meetings of the local management of coal companies agreeing to fix a minimum price for sale at dock. This collusion was known as "The Vend". The Vend ended and was reformed repeatedly during the late 19th century, ending by recession in the business cycle. "The Vend" was able to maintain its monopoly due to trade union assistance, and material advantages (primarily coal geography). During the early 20th century, as a result of comparable monopolistic practices in the Australian coastal shipping business, the Vend developed as an informal and illegal collusion between the steamship owners and the coal industry, eventually resulting in the High Court case Adelaide Steamship Co. Ltd v. R. & AG. Standard Oil was an American oil producing, transporting, refining, and marketing company. 
Established in 1870, it became the largest oil refiner in the world. John D. Rockefeller was a founder, chairman and major shareholder. The company was an innovator in the development of the business trust. The Standard Oil trust streamlined production and logistics, lowered costs, and undercut competitors. "Trust-busting" critics accused Standard Oil of using aggressive pricing to destroy competitors and form a monopoly that threatened consumers. Its controversial history as one of the world's first and largest multinational corporations ended in 1911, when the United States Supreme Court ruled that Standard was an illegal monopoly. The Standard Oil trust was dissolved into 33 smaller companies; two of its surviving "child" companies are ExxonMobil and the Chevron Corporation. U.S. Steel has been accused of being a monopoly. J. P. Morgan and Elbert H. Gary founded U.S. Steel in 1901 by combining Andrew Carnegie's Carnegie Steel Company with Gary's Federal Steel Company and William Henry "Judge" Moore's National Steel Company. At one time, U.S. Steel was the largest steel producer and largest corporation in the world. In its first full year of operation, U.S. Steel made 67 percent of all the steel produced in the United States. However, U.S. Steel's share of the expanding market slipped to 50 percent by 1911, and anti-trust prosecution that year failed. De Beers settled charges of price fixing in the diamond trade in the 2000s. De Beers is well known for its monopoloid practices throughout the 20th century, whereby it used its dominant position to manipulate the international diamond market. The company used several methods to exercise this control over the market. Firstly, it convinced independent producers to join its single channel monopoly, it flooded the market with diamonds similar to those of producers who refused to join the cartel, and lastly, it purchased and stockpiled diamonds produced by other manufacturers in order to control prices through limiting supply. In 2000, the De Beers business model changed due to factors such as the decision by producers in Russia, Canada and Australia to distribute diamonds outside the De Beers channel, as well as rising awareness of blood diamonds that forced De Beers to "avoid the risk of bad publicity" by limiting sales to its own mined products. De Beers' market share by value fell from as high as 90% in the 1980s to less than 40% in 2012, having resulted in a more fragmented diamond market with more transparency and greater liquidity. In November 2011 the Oppenheimer family announced its intention to sell the entirety of its 40% stake in De Beers to Anglo American plc thereby increasing Anglo American's ownership of the company to 85%. The transaction was worth £3.2 billion ($5.1 billion) in cash and ended the Oppenheimer dynasty's 80-year ownership of De Beers. A public utility (or simply "utility") is an organization or company that maintains the infrastructure for a public service or provides a set of services for public consumption. Common examples of utilities are electricity, natural gas, water, sewage, cable television, and telephone. In the United States, public utilities are often natural monopolies because the infrastructure required to produce and deliver a product such as electricity or water is very expensive to build and maintain. American Telephone & Telegraph was a telecommunications giant. AT&T was broken up in 1984. The Comcast Corporation is the largest mass media and communications company in the world by revenue. 
It is the largest cable company and home Internet service provider in the United States, and the nation's third largest home telephone service provider. Comcast has a monopoly in Boston, Philadelphia, Chicago, and many other small towns across the US. The United Aircraft and Transport Corporation was an aircraft manufacturer holding company that was forced to divest itself of airlines in 1934. The Long Island Rail Road (LIRR) was founded in 1834, and since the mid-1800s has provided train service between Long Island and New York City. In the 1870s, LIRR became the sole railroad in that area through a series of acquisitions and consolidations. In 2013, the LIRR's commuter rail system is the busiest commuter railroad in North America, serving nearly 335,000 passengers daily. The British East India Company was created as a legal trading monopoly in 1600. The East India Company was formed for pursuing trade with the East Indies but ended up trading mainly with the Indian subcontinent, North-West Frontier Province, and Balochistan. The Company traded in basic commodities, which included cotton, silk, indigo dye, salt, saltpetre, tea and opium. Major League Baseball survived U.S. anti-trust litigation in 1922, though its special status is still in dispute as of 2009. The National Football League survived anti-trust lawsuit in the 1960s but was convicted of being an illegal monopoly in the 1980s. Other examples of monopolies - Microsoft has been the defendant in multiple anti-trust suits on strategy embrace, extend and extinguish. They settled anti-trust litigation in the U.S. in 2001. In 2004 Microsoft was fined 493 million euros by the European Commission which was upheld for the most part by the Court of First Instance of the European Communities in 2007. The fine was US$1.35 billion in 2008 for noncompliance with the 2004 rule. - MPAA (Motion Picture Association of America) has a monopoly over film ratings in the U.S. - Joint Commission is an organization that accredits more than 20,000 health care organizations and programs in the United States. The Commission has a monopoly over determining whether a U.S. hospital can participate in the publicly funded Medicare and Medicaid healthcare programs. - Monsanto has been sued by competitors for anti-trust and monopolistic practices. They have between 70% and 100% of the commercial GMO seed market in a small number of crops. - AAFES has a monopoly on retail sales at overseas U.S. military installations. - State stores in certain United States states, e.g. for liquor. - The Registered Dietitian union seeks monopoly over nutrition services through state-level licensing schemes. - The State retail alcohol monopolies of Norway (Vinmonopolet), Sweden (Systembolaget), Finland (Alko), Iceland (Vínbúð), Ontario (LCBO), Quebéc (SAQ), British Columbia (Liquor Distribution Branch), among others. - Google is widely considered a monopoly for search engines in Europe and North America, where "to google" has even become a word used in everyday language. According to professor Milton Friedman, laws against monopolies cause more harm than good, but unnecessary monopolies should be countered by removing tariffs and other regulation that upholds monopolies. A monopoly can seldom be established within a country without overt and covert government assistance in the form of a tariff or some other device. It is close to impossible to do so on a world scale. 
The De Beers diamond monopoly is the only one we know of that appears to have succeeded (and even De Beers are protected by various laws against so called "illicit" diamond trade). – In a world of free trade, international cartels would disappear even more quickly.— Milton Friedman, Free to Choose, p. 53–54 However, professor Steve H. Hanke believes that although private monopolies are more efficient than public ones, often by a factor of two, sometimes private natural monopolies, such as local water distribution, should be regulated (not prohibited) by, e.g., price auctions. Thomas DiLorenzo asserts, however, that during the early days of utility companies where there was little regulation, there were no natural monopolies and there was competition. Only when companies realized that they could gain power through government did monopolies begin to form. - Complementary monopoly - De facto standard - Dominant design - Flag carrier - History of monopoly - Ramsey problem, a policy rule concerning what price a monopolist should set. - Simulations and games in economics education that model monopolistic markets. - State monopoly capitalism - Unfair competition Notes and references - Michael Burgan (2007). J. Pierpont Morgan: Industrialist and Financier. p. 93. ISBN 9780756519872. - Milton Friedman. "VIII: Monopoly and the Social Responsibility of Business and Labor". Capitalism and Freedom (paperback) (40th anniversary ed.). The University of Chicago Press. p. 208. ISBN 0-226-26421-1. - Blinder, Alan S; Baumol, William J; Gale, Colton L (June 2001). "11: Monopoly". Microeconomics: Principles and Policy (paperback). Thomson South-Western. p. 212. ISBN 0-324-22115-0. A pure monopoly is an industry in which there is only one supplier of a product for which there are no close substitutes and in which is very difficult or impossible for another firm to coexist - Orbach, Barak; Campbell, Grace (2012). "The Antitrust Curse of Bigness". Southern California Law Review. - Binger and Hoffman (1998), p. 391. - Goodwin, N; Nelson, J; Ackerman, F; Weisskopf, T (2009). Microeconomics in Context (2nd ed.). Sharpe. pp. 307–308. - Samuelson, William F.; Marks, Stephen G. (2003). Managerial Economics (4th ed.). Wiley. pp. 365–366. - Nicholson, Walter; Snyder, Christopher (2007). Intermediate Microeconomics. Thomson. p. 379. - Frank (2009), p. 274. - Samuelson & Marks (2003), p. 365. - Ayers, Rober M.; Collinge, Robert A. (2003). Microeconomics. Pearson. p. 238. - Pindyck and Rubinfeld (2001), p. 127. - Png, Ivan (1999). Managerial Economics. Blackwell. p. 271. ISBN 1-55786-927-8. - Png (1999), p. 268. - Negbennebor, Anthony (2001). Microeconomics, The Freedom to Choose. CAT Publishing. - Mankiw (2007), p. 338. - Hirschey, M (2000). Managerial Economics. Dreyden. p. 426. - Pindyck, R; Rubinfeld, D (2001). Microeconomics (5th ed.). Prentice-Hall. p. 333. - Melvin and Boyes (2002), p. 245. - Varian, H (1992). Microeconomic Analysis (3rd ed.). Norton. p. 235. - Pindyck and Rubinfeld (2001), p. 370. - Frank (2008), p. 342. - Pindyck and Rubenfeld (2000), p. 325. - Nicholson (1998), p. 551. - Perfectly competitive firms are price takers. Price is exogenous and it is possible to associate each price with unique profit maximizing quantity. Besanko, David, and Ronald Braeutigam, Microeconomics 2nd ed., Wiley (2005), p. 413. - Binger, B.; Hoffman, E. (1998). Microeconomics with Calculus (2nd ed.). Addison-Wesley. - Frank (2009), p. 377. - Frank (2009), p. 378. - Depken, Craig (November 23, 2005). "10". 
Microeconomics Demystified. McGraw Hill. p. 170. ISBN 0-07-145911-1. - Davies, Glyn; Davies, John (July 1984). "The revolution in monopoly theory". Lloyds Bank Review (153): 38–52. - Levine, David; Boldrin, Michele (2008-09-07). Against intellectual monopoly. Cambridge University Press. p. 312. ISBN 978-0-521-87928-6. - Tirole, p. 66. - Tirole, p. 65. - Hirschey (2000), p. 412. - Melvin, Michael; Boyes, William (2002). Microeconomics (5th ed.). Houghton Mifflin. p. 239. - Pindyck and Rubinfeld (2001), p. 328. - Varian (1992), p. 233. - Png (1999). - Krugman, Paul; Wells, Robin (2009). Microeconomics (2nd ed.). Worth. - Samuelson and Marks (2006), p. 107. - Boyes and Melvin, p. 246. - Perloff (2009), p. 404. - Perloff (2009), p. 394. - Besanko and Beautigam (2005), p. 449. - Wessels, p. 159. - Boyes and Melvin, p. 449. - Varian (1992), p. 241. - Perloff (2009), p. 393. - Besanko and Beautigam (2005), p. 448. - Hall, Robert E.; Liberman, Marc (2001). Microeconomics: Theory and Applications (2nd ed.). South_Western. p. 263. - Besanko and Beautigam (2005), p. 451. - If the monopolist is able to segment the market perfectly, then the average revenue curve effectively becomes the marginal revenue curve for the company and the company maximizes profits by equating price and marginal costs. That is the company is behaving like a perfectly competitive company. The monopolist will continue to sell extra units as long as the extra revenue exceeds the marginal cost of production. The problem that the company has is that the company must charge a different price for each successive unit sold. - Varian (1992), p. 242. - Perloff (2009), p. 396. - Because MC is the same in each market segment the profit maximizing condition becomes produce where MR1 = MR2 = MC. Pindyck and Rubinfeld (2009), pp. 398–99. - As Pindyck and Rubinfeld note, managers may find it easier to conceptualize the problem of what price to charge in each segment in terms of relative prices and price elasticities of demand. Marginal revenue can be written in terms of elasticities of demand as MR = P(1+1/PED). Equating MR1 and MR2 we have P1 (1+1/PED) = P2 (1+1/PED) or P1/P2 = (1+1/PED2)/(1+1/PED1). Using this equation the manager can obtain elasticity information and set prices for each segment. [Pindyck and Rubinfeld (2009), pp. 401–02.] Note that the manager may be able to obtain industry elasticities, which are far more inelastic than the elasticity for an individual firm. As a rule of thumb the company’s elasticity coefficient is 5 to 6 times that of the industry. [Pindyck and Rubinfeld (2009) pp. 402.] - Colander, David C., p. 269. - Note that the discounts apply only to tickets not to concessions. The reason there is not any popcorn discount is that there is not any effective way to prevent resell. A profit maximizing theater owner maximizes concession sales by selling where marginal revenue equals marginal cost. - Lovell (2004), p. 266. - Frank (2008), p. 394. - Frank (2008), p. 266. - Smith, Adam (1776), Wealth of Nations, Penn State Electronic Classics edition, republished 2005 - Binger and Hoffman (1998), p. 406. - Samuelson, P. & Nordhaus, W.: Microeconomics, 17th ed. McGraw-Hill 2001 - Samuelson, W; Marks, S (2005). Managerial Economics (4th ed.). Wiley. p. 376. - Samuelson and Marks (2003), p. 100. - Frank, Robert H. (2008). Microeconomics and Behavior (7th ed.). McGraw-Hill. ISBN 978-0-07-126349-8. 
- Case 27/76: United Brands Company and United Brands Continentaal BV v Commission of the European Communities (ECR 207), 14 February 1978 - Kerber, Wolfgang; Kretschmer, Jürgen-Peter; von Wangenheim, Georg (September 23, 2009), Market Share Thresholds and Herfindahl-Hirschman-Index (HHI) as Screening Instruments in Competition Law: A Theoretical Analysis (PDF), Department of Economics, University of Vienna - "1.5 Concentration and Market Shares", Horizontal Merger Guidelines (U.S. Department of Justice and the Federal Trade Commission), April 8, 1997 - Case 85/76: Hoffmann-La Roche & Co. AG v Commission of the European Communities (ECR 461), 13 February 1979 - AKZO Chemie BV v Commission of the European Communities, 3 July 1991 - Case 322/81: NV Nederlandsche Banden Industrie Michelin v Commission of the European Communities, 9 November 1983 - COMMISSION DECISION of 14 July 1999 relating to a proceeding under Article 82 of the EC Treaty (IV/D-2/34.780 — Virgin/British Airways, 14 July 1999, p. L30/1 - Case 6-72: Europemballage Corporation and Continental Can Company Inc. v Commission of the European Communities, 21 February 1973 - Aristotle. Politics (350 B.C.E ed.). - Aristotle. Politics. p. 1252α. - Richardson, Gary (June 2001). "A Tale of Two Theories: Monopolies and Craft Guilds in Medieval England and Modern Imagination". Journal of the History of Economic Thought. - Chazelas, Jean (1968). "La suppression de la gabelle du sel en 1945". Le rôle du sel dans l'histoire: travaux préparés sous la direction de Michel Mollat (Presses universitaires de France): 263–65. - Gollan, Robin (1963). The Coalminers of New South Wales: a history of the union, 1860–1960. Melbourne: Melbourne University Press. pp. 45–134. - "Exxon Mobil - Our history". Exxon Mobil Corp. Retrieved 2009-02-03. - Morris, Charles R. The Tycoons: How Andrew Carnegie, John D. Rockefeller, Jay Gould, and J.P. Morgan invented the American supereconomy, H. Holt and Co., New York, 2005, pp. 255-258. ISBN 0-8050-7599-2. - "United States Steel Corporation History". FundingUniverse. Retrieved 3 January 2014. - Boselovic, Len (February 25, 2001). "Steel Standing: U.S. Steel celebrates 100 years". PG News - Business & Technology. post-gazette.com - PG Publishing. Retrieved 6 August 2013. - "West's Encyclopedia of American Law". Answers.com. 2009-06-28. Retrieved 2011-10-11. - Lasar, Matthew (May 13, 2011), How Robber Barons hijacked the "Victorian Internet": Ars revisits those wild and crazy days when Jay Gould ruled the telegraph and ..., Ars technica - Kevin J. O'Brien, IHT.com, Regulators in Europe fight for independence, International Herald Tribune, November 9, 2008, Accessed November 14, 2008. - IfM - Comcast/NBCUniversal, LLC. Mediadb.eu (2013-11-15). Retrieved on 2013-12-09. - Dickens, Matthew (24 May 2013), TRANSIT RIDERSHIP REPORT: First Quarter 2013 (PDF), American Public Transportation Association, retrieved 3 January 2014 - Van Boven, M. W. "Towards A New Age of Partnership (TANAP): An Ambitious World Heritage Project (UNESCO Memory of the World – reg.form, 2002)". VOC Archives Appendix 2, p.14. - EU competition policy and the consumer - Leo Cendrowicz (2008-02-27). "Microsoft Gets Mother Of All EU Fines". Forbes. Retrieved 2008-03-10. - "EU fines Microsoft record $1.3 billion". Time Warner. 2008-02-27. Retrieved 2008-03-10. - "American Society for Healthcare Engineering". - "In Praise of Private Infrastructure", Globe Asia, April 2008 - Thomas J. DiLorenzo. "The Myth of Natural Monopoly – Thomas J. 
DiLorenzo – Mises Daily". Mises.org. Retrieved 2012-11-02. - Guy Ankerl, Beyond Monopoly Capitalism and Monopoly Socialism. Cambridge, Massachusetts: Schenkman Pbl., 1978. ISBN 0-87073-938-7 - McChesney, Fred (2008). "Antitrust". In David R. Henderson (ed.). Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0865976658. OCLC 237794267. - Stigler, George J. (2008). "Monopoly". In David R. Henderson (ed.). Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0865976658. OCLC 237794267. - Monopoly: A Brief Introduction by The Linux Information Project - Monopoly by Elmer G. Wiens: Online Interactive Models of Monopoly (Public or Private) and Oligopoly - Beach, Chandler B., ed. (1914). "Monopoly". The New Student's Reference Work. Chicago: F. E. Compton and Co. - Impact of Antitrust Laws on American Professional Team Sports - A monopolist who does not know the demand curve – A paper and a simulation software by Valentino Piana (2002). - Monopoly Profit and Loss by Fiona Maclachlan & Monopoly and Natural Monopoly by Seth J. Chandler, Wolfram Demonstrations Project - Government and Microsoft: a Libertarian View on Monopolies (by François-René Rideau on his personal website) - The Myth of Natural Monopoly (by Thomas J. DiLorenzo on www.Mises.org) – 1996 - Natural Monopoly and Its Regulation - From rulers' monopolies to users' choices A critical survey of monopolistic practices - Body of Knowledge on Infrastructure Regulation Monopoly and Market Power
Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one. Augmentation is conventionally in real-time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally manipulable. Artificial information about the environment and its objects can be overlaid on the real world. - 1 Technology - 2 Applications - 2.1 Archaeology - 2.2 Architecture - 2.3 Art - 2.4 Commerce - 2.5 Construction - 2.6 Education - 2.7 Everyday - 2.8 Gaming - 2.9 Industrial design - 2.10 Medical - 2.11 Military - 2.12 Navigation - 2.13 Office workplace - 2.14 Sports and entertainment - 2.15 Task support - 2.16 Television - 2.17 Tourism and sightseeing - 2.18 Translation - 3 Notable researchers - 4 History - 5 See also - 6 References - 7 External links Hardware components for augmented reality are: processor, display, sensors and input devices. Modern mobile computing devices like smartphones and tablet computers contain these elements which often include a camera and MEMS sensors such as accelerometer, GPS, and solid state compass, making them suitable AR platforms. Various technologies are used in Augmented Reality rendering including optical projection systems, monitors, hand held devices, and display systems worn on one's person. A head-mounted display (HMD) is a display device paired to a headset such as a harness or helmet. HMDs place images of both the physical world and virtual objects over the user's field of view. Modern HMDs often employ sensors for six degrees of freedom monitoring that allow the system to align virtual information to the physical world and adjust accordingly with the user's head movements. HMDs can provide users immersive, mobile and collaborative AR experiences. AR displays can be rendered on devices resembling eyeglasses. Versions include eyewear that employ cameras to intercept the real world view and re-display its augmented view through the eye pieces and devices in which the AR imagery is projected through or reflected off the surfaces of the eyewear lens pieces. Google Glass is not intended for an AR experience, but third-party developers are pushing the device toward a mainstream AR experience. After the debut of Google Glass many other AR devices emerged as alternatives. Most promising Google Alternatives can be listed as Vuzix M100, Optinvent, Meta Space Glasses, Telepathy, Recon Jet, Glass Up, K-Glass. CrowdOptic, an existing app for smartphones, applies algorithms and triangulation techniques to photo metadata including GPS position, compass heading, and a time stamp to arrive at a relative significance value for photo objects. CrowdOptic technology can be used by Google Glass users to learn where to look at a given point in time. Contact lenses that display AR imaging are in development. 
These bionic contact lenses might contain the elements for display embedded into the lens including integrated circuitry, LEDs and an antenna for wireless communication. Another version of contact lenses, in development for the U.S. Military, is designed to function with AR spectacles, allowing soldiers to focus on close-to-the-eye AR images on the spectacles and distant real world objects at the same time. In 2013, at the Augmented World Expo Conference, a futuristic video named Sight featuring the potential of having augmented reality through contact lenses received the best futuristic augmented reality video award. Virtual retinal display A virtual retinal display (VRD) is a personal display device under development at the University of Washington's Human Interface Technology Laboratory. With this technology, a display is scanned directly onto the retina of a viewer's eye. The viewer sees what appears to be a conventional display floating in space in front of them. The EyeTap (also known as Generation-2 Glass) captures rays of light that would otherwise pass through the center of a lens of an eye of the wearer, and substitutes synthetic computer-controlled light for each ray of real light. The Generation-4 Glass (Laser EyeTap) is similar to the VRD (i.e. it uses a computer controlled laser light source) except that it also has infinite depth of focus and causes the eye itself to, in effect, function as both a camera and a display, by way of exact alignment with the eye, and resynthesis (in laser light) of rays of light entering the eye. Handheld displays employ a small display that fits in a user's hand. All handheld AR solutions to date opt for video see-through. Initially handheld AR employed fiduciary markers, and later GPS units and MEMS sensors such as digital compasses and six degrees of freedom accelerometer–gyroscope. Today SLAM markerless trackers such as PTAM are starting to come into use. Handheld display AR promises to be the first commercial success for AR technologies. The two main advantages of handheld AR is the portable nature of handheld devices and ubiquitous nature of camera phones. The disadvantages are the physical constraints of the user having to hold the handheld device out in front of them at all times as well as distorting effect of classically wide-angled mobile phone cameras when compared to the real world as viewed through the eye. Spatial Augmented Reality (SAR) augments real world objects and scenes without the use of special displays such as monitors, head mounted displays or hand-held devices. SAR makes use of digital projectors to display graphical information onto physical objects. The key difference in SAR is that the display is separated from the users of the system. Because the displays are not associated with each user, SAR scales naturally up to groups of users, thus allowing for collocated collaboration between users. Examples include shader lamps, mobile projectors, virtual tables, and smart projectors. Shader lamps mimic and augment reality by projecting imagery onto neutral objects, providing the opportunity to enhance the object’s appearance with materials of a simple unit- a projector, camera, and sensor. Other applications include table and wall projections. One innovation, the Extended Virtual Table, separates the virtual from the real by including beam-splitter mirrors attached to the ceiling at an adjustable angle. 
Virtual showcases, which employ beam-splitter mirrors together with multiple graphics displays, provide an interactive means of simultaneously engaging with the virtual and the real. Many more implementations and configurations make spatial augmented reality display an increasingly attractive interactive alternative. A SAR system can display on any number of surfaces of an indoor setting at once. SAR supports both a graphical visualisation and passive haptic sensation for the end users. Users are able to touch physical objects in a process that provides passive haptic sensation. Modern mobile augmented reality systems use one or more of the following tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid state compasses, RFID and wireless sensors. These technologies offer varying levels of accuracy and precision. Most important is the position and orientation of the user's head. Tracking the user's hand(s) or a handheld input device can provide a 6DOF interaction technique. Input techniques include speech recognition systems that translate a user's spoken words into computer instructions and gesture recognition systems that can interpret a user's body movements by visual detection or from sensors embedded in a peripheral device such as a wand, stylus, pointer, glove or other body wear. The computer analyzes the sensed visual and other data to synthesize and position augmentations.
Software and algorithms
A key measure of AR systems is how realistically they integrate augmentations with the real world. The software must derive real world coordinates, independent of the camera, from camera images. That process is called image registration, and it uses different methods of computer vision, mostly related to video tracking. Many computer vision methods of augmented reality are inherited from visual odometry. Usually those methods consist of two stages. The first stage detects interest points, fiduciary markers, or optical flow in the camera images. This stage can use feature detection methods like corner detection, blob detection, edge detection or thresholding, and/or other image processing methods. The second stage restores a real world coordinate system from the data obtained in the first stage. Some methods assume objects with known geometry (or fiduciary markers) are present in the scene. In some of those cases the scene's 3D structure should be precalculated beforehand. If part of the scene is unknown, simultaneous localization and mapping (SLAM) can map relative positions. If no information about scene geometry is available, structure from motion methods like bundle adjustment are used. Mathematical methods used in the second stage include projective (epipolar) geometry, geometric algebra, rotation representation with the exponential map, Kalman and particle filters, nonlinear optimization, and robust statistics. Augmented Reality Markup Language (ARML) is a data standard developed within the Open Geospatial Consortium (OGC), which consists of an XML grammar to describe the location and appearance of virtual objects in the scene, as well as ECMAScript bindings to allow dynamic access to properties of virtual objects. To enable rapid development of augmented reality applications, some software development kits (SDKs) have emerged. Some of the well-known AR SDKs are offered by Metaio, Vuforia, Wikitude and Layar.
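As a rough illustration of the two-stage registration pipeline described above, the following is a minimal sketch in Python using OpenCV. It detects and matches interest points between a reference image and a camera frame and then estimates a planar homography; the file names are placeholders, and a full AR tracker would additionally use the camera intrinsics to recover a 3D pose rather than stopping at a 2D homography.

```python
import cv2
import numpy as np

# Placeholder images: a known reference (e.g. a printed marker or poster)
# and the current camera frame, both loaded as grayscale.
reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

# Stage 1: detect interest points and compute descriptors (ORB is a fast corner-based detector).
orb = cv2.ORB_create(nfeatures=1000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frame, des_frame = orb.detectAndCompute(frame, None)

# Match descriptors between the reference image and the camera frame.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_frame), key=lambda m: m.distance)

# Stage 2 (simplified): estimate a planar homography from the matches with RANSAC.
src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# H maps reference-image coordinates into the camera frame, which is enough to
# overlay a 2D augmentation on a planar target.
print(H)
```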
Augmented reality has many applications, and many areas can benefit from the use of AR technology. AR was first used for military, industrial, and medical applications, but was soon applied to commercial and entertainment areas. AR can be used to aid archaeological research, by augmenting archaeological features onto the modern landscape, enabling archaeologists to formulate conclusions about site placement and configuration. Another application given to AR in this field is the possibility for users to rebuild ruins, buildings, or even landscapes as they formerly existed. AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed onto a real life local view of a property before the physical building is constructed there. AR can also be employed within an architect's work space, rendering into their view animated 3D visualizations of their 2D drawings. Architecture sight-seeing can be enhanced with AR applications allowing users viewing a building's exterior to virtually see through its walls, viewing its interior objects and layout. AR technology has helped disabled individuals create art by using eye tracking to translate a user's eye movements into drawings on a screen. An item such as a commemorative coin can be designed so that when scanned by an AR-enabled device it displays additional objects and layers of information that were not visible in a real world view of it. In 2013, L'Oreal used CrowdOptic technology to create an augmented reality experience at the seventh annual Luminato Festival in Toronto, Canada. AR can enhance product previews, such as allowing a customer to view what's inside a product's packaging without opening it. AR can also be used as an aid in selecting products from a catalog or through a kiosk. Scanned images of products can activate views of additional content such as customization options and additional images of the product in its use. AR is used to integrate print and video marketing. Printed marketing material can be designed with certain "trigger" images that, when scanned by an AR enabled device using image recognition, activate a video version of the promotional material. A major difference between augmented reality and straightforward image recognition is that multiple media can be overlaid in the view screen at the same time, such as social media share buttons, in-page video, and even audio and 3D objects. Traditional print-only publications are using augmented reality to connect many different types of media. With the continual improvements to GPS accuracy, businesses are able to use augmented reality to visualize georeferenced models of construction sites, underground structures, cables and pipes using mobile devices. Following the Christchurch earthquake, the University of Canterbury released CityViewAR, which enabled city planners and engineers to visualize buildings that had been destroyed in the earthquake. Not only did this provide planners with tools to reference the previous cityscape, but it also served as a reminder of the magnitude of the devastation caused, as entire buildings had been demolished. Augmented reality applications can complement a standard curriculum. Text, graphics, video and audio can be superimposed into a student's real-time environment. Textbooks, flashcards and other educational reading material can contain embedded "markers" that, when scanned by an AR device, produce supplementary information for the student rendered in a multimedia format.
Students can participate interactively with computer-generated simulations of historical events, exploring and learning details of each significant area of the event site. In higher education, there are several applications that can be used. For instance, Construct3D, a Studierstube system, allows students to learn mechanical engineering concepts, math or geometry. This is an active learning process in which students learn to learn with technology. AR can aid students in understanding chemistry by allowing them to visualize the spatial structure of a molecule and interact with a virtual model of it that appears, in a camera image, positioned at a marker held in their hand. It can also enable students of physiology to visualize different systems of the human body in three dimensions. Augmented reality technology also permits learning via remote collaboration, in which students and instructors not at the same physical location can share a common virtual learning environment populated by virtual objects and learning materials and interact with one another within that setting. This resource could also be taken advantage of in primary school. Students learn through experience, and young children in particular need to see things in order to learn them. For instance, astronomy is usually difficult for them to grasp, but with such a device children can understand the Solar System better because they can see it in 3D; even children under six years old could follow it this way. In addition, learners could supplement the static pictures in their science books with this resource. To teach bones or organs, a piece of paper containing an embedded marker for the bone or organ lying underneath it could be stuck to the body; when the children move the paper, the teacher would only need to press a button, so the same embedded marker could be reused to teach another part of the body. Since the 1970s and early 1980s, Steve Mann has been developing technologies meant for everyday use, i.e. "horizontal" across all applications rather than a specific "vertical" market. Examples include Mann's "EyeTap Digital Eye Glass", a general-purpose seeing aid that does dynamic-range management (HDR vision) and overlays, underlays, simultaneous augmentation and diminishment (e.g. diminishing the electric arc while looking at a welding torch). Augmented reality allows gamers to experience digital game play in a real world environment. In the last 10 years there have been many improvements in technology, resulting in better movement detection (making devices such as the Wii possible) and even direct detection of the player's movements. AR can help industrial designers experience a product's design and operation before completion. Volkswagen uses AR for comparing calculated and actual crash test imagery. AR can be used to visualize and modify a car body structure and engine layout. AR can also be used to compare digital mock-ups with physical mock-ups for finding discrepancies between them. Augmented reality can provide the surgeon with information that is otherwise hidden, such as the heart rate, blood pressure, and the state of the patient's organs. AR can be used to let a doctor look inside a patient by combining one source of images such as an X-ray with another such as video.
Medical examples include a virtual X-ray view based on prior tomography or on real-time images from ultrasound and confocal microscopy probes, or visualizing the position of a tumor in the video of an endoscope. AR can enhance viewing a fetus inside a mother's womb. See also Mixed reality.

In combat, AR can serve as a networked communication system that renders useful battlefield data onto a soldier's goggles in real time. From the soldier's viewpoint, people and various objects can be marked with special indicators to warn of potential dangers. Virtual maps and 360° camera imaging can also be rendered to aid a soldier's navigation and battlefield perspective, and this can be transmitted to military leaders at a remote command center.

An interesting application of AR occurred when Rockwell International created video map overlays of satellite and orbital debris tracks to aid in space observations at the Air Force Maui Optical System. In their 1993 paper "Debris Correlation Using the Rockwell WorldView System" the authors describe the use of map overlays applied to video from space surveillance telescopes. The map overlays indicated the trajectories of various objects in geographic coordinates. This allowed telescope operators to identify satellites, and also to identify, and catalog, potentially dangerous space debris.

AR can augment the effectiveness of navigation devices. Information can be displayed on an automobile's windshield indicating destination directions and meter, weather, terrain, road conditions and traffic information, as well as alerts to potential hazards in the driver's path. Aboard maritime vessels, AR can allow bridge watch-standers to continuously monitor important information such as a ship's heading and speed while moving throughout the bridge or performing other tasks. The NASA X-38 was flown using a Hybrid Synthetic Vision system that overlaid map data on video to provide enhanced navigation for the spacecraft during flight tests from 1998 to 2002. It used the LandForm software and was useful in times of limited visibility, including an instance when the video camera window frosted over, leaving astronauts to rely on the map overlays. The LandForm software was also test-flown at the Army Yuma Proving Ground in 1999, with map markers indicating runways, the air traffic control tower, taxiways, and hangars overlaid on the video.

AR can help facilitate collaboration among distributed team members in a work force via conferences with real and virtual participants. AR tasks can include brainstorming and discussion meetings utilizing common visualization via touch-screen tables, interactive digital whiteboards, shared design spaces, and distributed control rooms.

Sports and entertainment
AR has become common in sports telecasting. Sports and entertainment venues are provided with see-through and overlay augmentation through tracked camera feeds for enhanced viewing by the audience. Examples include the yellow "first down" line seen in television broadcasts of American football games, showing the line the offensive team must cross to receive a first down. AR is also used in association with football and other sporting events to show commercial advertisements overlaid onto the view of the playing area. Sections of rugby fields and cricket pitches also display sponsored images.
Swimming telecasts often add a line across the lanes to indicate the position of the current record holder as a race proceeds, allowing viewers to compare the current race to the best performance. Other examples include hockey puck tracking and annotations of racing-car performance and snooker ball trajectories.

AR can enhance concert and theater performances. For example, artists can allow listeners to augment their listening experience by adding their performance to that of other bands or groups of users.

The gaming industry has benefited considerably from the development of this technology. A number of games have been developed for prepared indoor environments. Early AR games include AR air hockey, collaborative combat against virtual enemies, and AR-enhanced pool games. A significant number of games now incorporate AR, and the introduction of the smartphone has increased its impact further.

Complex tasks such as assembly, maintenance, and surgery can be simplified by inserting additional information into the field of view. For example, labels can be displayed on parts of a system to clarify operating instructions for a mechanic who is performing maintenance on the system. Assembly lines gain many benefits from the use of AR: in addition to Boeing, BMW and Volkswagen are known for incorporating this technology into their assembly lines to improve their manufacturing and assembly processes. Big machines are difficult to maintain because of their multiple layers and structures. AR lets workers complete such jobs more easily because it permits them to look through the machine as if with X-ray vision, pointing them to the problem right away.

Weather visualizations were the first application of augmented reality to television. It has now become common in weathercasting to display full-motion video of images captured in real time from multiple cameras and other imaging devices. Coupled with 3D graphics symbols and mapped to a common virtual geospace model, these animated visualizations constitute the first true application of AR to TV. Sports telecasting, discussed above, uses the same technique of overlay augmentation through tracked camera feeds.

Augmented reality is also starting to allow Next Generation TV viewers to interact with the programs they are watching. They can place objects into an existing program and interact with these objects, for example by moving them around, and can see real-time avatars of other people who are watching the same program.
Tourism and sightseeing
Augmented reality applications can enhance a user's experience when traveling by providing real-time informational displays about a location and its features, including comments made by previous visitors of the site. AR applications allow tourists to experience simulations of historical events, places and objects by rendering them into their current view of a landscape. AR applications can also present location information by audio, announcing features of interest at a particular site as they become visible to the user.

AR systems can interpret foreign text on signs and menus and, in a user's augmented view, re-display the text in the user's language. Spoken words of a foreign language can be translated and displayed in a user's view as printed subtitles.

- Ivan Sutherland invented the first AR head-mounted display at Harvard University.
- Steven Feiner, Professor at Columbia University, is a leading pioneer of augmented reality, and author of the first paper on an AR system prototype, KARMA (the Knowledge-based Augmented Reality Maintenance Assistant), along with Blair MacIntyre and Doree Seligmann.
- Steve Mann formulated an earlier concept of mediated reality in the 1970s and 1980s, using cameras, processors, and display systems to modify visual reality to help people see better (dynamic-range management), building computerized welding helmets as well as "Augmediated Reality" vision systems for use in everyday life.
- Louis Rosenberg developed one of the first known AR systems, called Virtual Fixtures, while working at the U.S. Air Force Armstrong Labs in 1991, and published the first study of how an AR system can enhance human performance. Rosenberg's subsequent work at Stanford University in the early 1990s was the first proof that virtual overlays, when registered and presented over a user's direct view of the real physical world, could significantly enhance human performance.
- Dieter Schmalstieg and Daniel Wagner jump-started the field of AR on mobile phones. They developed the first marker tracking systems for mobile phones and PDAs.
- Bruce H. Thomas and Wayne Piekarski developed the Tinmith system in 1998. Together with Steve Feiner and his MARS system, they pioneered outdoor augmented reality.
- Reinhold Behringer performed important early work in image registration for augmented reality and on prototype wearable testbeds for augmented reality. He also co-organized the First IEEE International Symposium on Augmented Reality in 1998 (IWAR'98) and co-edited one of the first books on augmented reality.

- 1901: L. Frank Baum, an author, first mentions the idea of an electronic display/spectacles that overlays data onto real life (in this case 'people'); it is named a 'character marker'.
- 1957–62: Morton Heilig, a cinematographer, creates and patents a simulator called Sensorama with visuals, sound, vibration, and smell.
- 1968: Ivan Sutherland invents the head-mounted display and positions it as a window into a virtual world.
- 1975: Myron Krueger creates Videoplace to allow users to interact with virtual objects for the first time.
- 1980: Steve Mann creates the first wearable computer, a computer vision system with text and graphical overlays on a photographically mediated reality, or Augmediated Reality. See EyeTap.
- 1981: Dan Reitan geospatially maps multiple weather radar images and space-based and studio cameras to virtual reality Earth maps and abstract symbols for television weather broadcasts, bringing augmented reality to TV.
- 1989: Jaron Lanier coins the phrase Virtual Reality and creates the first commercial business around virtual worlds.
- 1990: The term "Augmented Reality" is believed to be attributed to Tom Caudell, a former Boeing researcher.
- 1992: Louis Rosenberg develops one of the first functioning AR systems, called Virtual Fixtures, at the U.S. Air Force Research Laboratory—Armstrong, and demonstrates benefits to human performance.
- 1992: Steven Feiner, Blair MacIntyre and Doree Seligmann present the first major paper on an AR system prototype, KARMA, at the Graphics Interface conference.
- 1993: A widely cited version of the paper above is published in Communications of the ACM – Special issue on computer augmented environments, edited by Pierre Wellner, Wendy Mackay, and Rich Gold.
- 1993: Loral WDL, with sponsorship from STRICOM, performs the first demonstration combining live AR-equipped vehicles and manned simulators (unpublished paper: J. Barrilleaux, "Experiences and Observations in Applying Augmented Reality to Live Training", 1999).
- 1994: Julie Martin creates the first 'Augmented Reality Theater production', Dancing In Cyberspace. Funded by the Australia Council for the Arts, it features dancers and acrobats manipulating body-sized virtual objects in real time, projected into the same physical space and performance plane. The acrobats appeared immersed within the virtual objects and environments. The installation used Silicon Graphics computers and a Polhemus sensing system.
- 1998: Spatial Augmented Reality introduced at the University of North Carolina at Chapel Hill by Ramesh Raskar, Welch, and Henry Fuchs.
- 1999: The US Naval Research Laboratory engages in a decade-long research program called the Battlefield Augmented Reality System (BARS) to prototype some of the early wearable systems for dismounted soldiers operating in urban environments, for situation awareness and training (see the NRL BARS Web page).
- 1999: Hirokazu Kato (加藤 博一) creates ARToolKit at HITLab, where AR is later further developed by other HITLab scientists, demonstrating it at SIGGRAPH.
- 2000: Bruce H. Thomas develops ARQuake, the first outdoor mobile AR game, demonstrating it at the International Symposium on Wearable Computers.
- 2001: NASA X-38 flown using LandForm software video map overlays at Dryden Flight Research Center.
- 2008: Wikitude AR Travel Guide launches on 20 October 2008 with the G1 Android phone.
- 2009: ARToolkit is ported to Adobe Flash (FLARToolkit) by Saqoosha, bringing augmented reality to the web browser.
- 2013: Google announces an open beta test of its Google Glass augmented reality glasses. The glasses reach the Internet through Bluetooth, which connects to the wireless service on a user's cellphone. The glasses respond when a user speaks, touches the frame or moves the head.

- Alternate reality game
- Augmented browsing
- Augmented reality-based testing
- Augmented web
- Bionic contact lens
- Brain in a vat
- Computer-mediated reality
- Head-mounted display
- Lifelike experience
- List of augmented reality software
- Optical head-mounted display
- Simulated reality
- Transreality gaming
- Video mapping
- Virtual reality
- Wearable computing

- Graham, M., Zook, M., and Boulton, A. "Augmented reality in urban places: contested content and the duplicity of code."
Transactions of the Institute of British Geographers, DOI: 10.1111/j.1475-5661.2012.00539.x 2012. - Steuer, Jonathan. Defining Virtual Reality: Dimensions Determining Telepresence, Department of Communication, Stanford University. 15 October 1993. - Introducing Virtual Environments National Center for Supercomputing Applications, University of Illinois. - Chen, Brian X. If You’re Not Seeing Data, You’re Not Seeing, Wired, 25 August 2009. - Maxwell, Kerry. Augmented Reality, Macmillan Dictionary Buzzword. - Augmented reality-Everything about AR, Augmented Reality On. - Azuma, Ronald. A Survey of Augmented Reality Presence: Teleoperators and Virtual Environments, pp. 355–385, August 1997. - Metz, Rachel. Augmented Reality Is Finally Getting Real Technology Review, 2 August 2012. - Fleet Week: Office of Naval Research Technology- Virtual Reality Welder Training, eweek, 28 May 2012. - Rolland, Jannick; Baillott, Yohan; Goon, Alexei.A Survey of Tracking Technology for Virtual Environments, Center for Research and Education in Optics and Lasers, University of Central Florida. - Klepper, Sebastian.Augmented Reality – Display Systems. - Rolland, J; Biocca F; Hamza-Lup F; Yanggang H; Martins R (October 2005). "Development of Head-Mounted Projection Displays for Distributed, Collaborative, Augmented Reality Applications". Presence: Teleoperators & Virtual Environments 14 (5): 528–549. - Grifatini, Kristina. Augmented Reality Goggles, Technology Review 10 November 2010. - Arthur, Charles. UK company's 'augmented reality' glasses could be better than Google's, The Guardian, 10 September 2012. - Gannes, Liz. "Google Unveils Project Glass: Wearable Augmented-Reality Glasses". http://allthingsd.com. Retrieved 2012-04-04., All Things D. - Benedetti, Winda. Xbox leak reveals Kinect 2, augmented reality glasses NBC News. - Manjoo, Farhad (2012-06-19). "You Will Want Google Goggles | MIT Technology Review". Technologyreview.com. Retrieved 2013-06-14. - "faqs – Glass Press". Sites.google.com. Retrieved 2013-07-08. - "Google Glass Alternative". Digital Trends. Retrieved 15 November 2013. - "Some of Google Glass Alternative". Premier Logic. Retrieved 15 November 2013. - "5 Google Glass Alternatives". NBC News. Retrieved 15 November 2013. - "Vuzix M100". Vuzix M100. Retrieved 15 November 2013. - "Optinvent". Optinvent. Retrieved 15 November 2013. - "Meta Space Glasses". Meta Glass. Retrieved 15 November 2013. - "Telepathy". Telepathy. Retrieved 15 November 2013. - "Recon Jet". Recon Jet. Retrieved 15 November 2013. - "Glass Up". Glass Up. Retrieved 15 November 2013. - "K-Glass is a high-speed, head-mounted display with augmented reality chip". 2014. - "How Crowdoptic’s big data technology reveals the world’s most popular photo objects". VentureBeat. Retrieved 6 June 2013. - "CrowdOptic and L'Oreal To Make History By Demonstrating How Augmented Reality Can Be A Shared Experience". Forbes. Retrieved 6 June 2013. - Greenemeier, Larry. Computerized Contact Lenses Could Enable In-Eye Augmented Reality. Scientific American, 23 November 2011. - Yoneda, Yuka. Solar Powered Augmented Contact Lenses Cover Your Eye with 100s of LEDs. inhabitat, 17 March 2010. - Rosen, Kenneth. "Contact Lenses Can Display Your Text Messages". Mashable.com. Mashable.com. Retrieved 2012-12-13. - O'Neil, Lauren. "LCD contact lenses could display text messages in your eye". CBC. Retrieved 2012-12-12. - Anthony, Sebastian. US military developing multi-focus augmented reality contact lenses. ExtremeTech, 13 April 2012. - Bernstein, Joseph. 
2012 Invention Awards: Augmented-Reality Contact Lenses Popular Science, 5 June 2012. - Augmented World Expo Conference. . AR Conference, 15 November 2013. - A Futuristic Short Film: by Sight Systems. . Sight, 15 November 2013. - Tidwell, Michael; Johnson, Richard S.; Melville, David; Furness, Thomas A.The Virtual Retinal Display – A Retinal Scanning Imaging System, Human Interface Technology Laboratory, University of Washington. - “GlassEyes”: The Theory of EyeTap Digital Eye Glass, supplemental material for IEEE Technology and Society, Volume Vol. 31, Number 3, 2012, pp. 10-14. - "Intelligent Image Processing", John Wiley and Sons, 2001, ISBN 0-471-40637-6, 384 p. - Marker vs Markerless AR, Dartmouth College Library. - Feiner, Steve. "Augmented reality: a long way off?". AR Week. Pocket-lint. Retrieved 2011-03-03. - Bimber, Oliver; Encarnação, Miguel; Branco, Pedro. The Extended Virtual Table: An Optical Extension for Table-Like Projection Systems, MIT Press Journal Vol. 10, No. 6, Pages 613–631, March 13, 2006. - Ramesh Raskar, Greg Welch, Henry Fuchs Spatially Augmented Reality, First International Workshop on Augmented Reality, Sept 1998. - Knight, Will. Augmented reality brings maps to life 19 July 2005. - Sung, Dan. Augmented reality in action – maintenance and repair. Pocket-lint, 1 March 2011. - Stationary systems can employ 6DOF track systems such as Polhemus, ViCON, A.R.T, or Ascension. - Marshall, Gary.Beyond the mouse: how input is evolving, Touch,voice and gesture recognition and augmented realitytechradar.computing\PC Plus 23 August 2009. - Simonite, Tom. Augmented Reality Meets Gesture Recognition, Technology Review, 15 September 2011. - Chaves, Thiago; Figueiredo, Lucas; Da Gama, Alana; de Araujo, Christiano; Teichrieb, Veronica. Human Body Motion and Gestures Recognition Based on Checkpoints. SVR '12 Proceedings of the 2012 14th Symposium on Virtual and Augmented Reality pp. 271–278. - Barrie, Peter; Komninos, Andreas; Mandrychenko, Oleksii.A Pervasive Gesture-Driven Augmented Reality Prototype using Wireless Sensor Body Area Networks. - Azuma, Ronald; Balliot, Yohan; Behringer, Reinhold; Feiner, Steven; Julier, Simon; MacIntyre, Blair. Recent Advances in Augmented Reality Computers & Graphics, November 2001. - Maida, James; Bowen, Charles; Montpool, Andrew; Pace, John. Dynamic registration correction in augmented-reality systems, Space Life Sciences, NASA. - State, Andrei; Hirota, Gentaro; Chen,David T; Garrett, William; Livingston, Mark. Superior Augmented Reality Registration by Integrating Landmark Tracking and Magnetic Tracking, Department of Computer ScienceUniversity of North Carolina at Chapel Hill. - Bajura, Michael; Neumann, Ulrich. Dynamic Registration Correction in Augmented-Reality Systems University of North Carolina, University of Southern California. - "ARML 2.0 SWG". Open Geospatial Consortium website. Open Geospatial Consortium. Retrieved 12 November 2013. - "Top 5 AR SDKs". Augmented Reality News. Retrieved 15 November 2013. - "Top 10 AR SDKs". Augmented World Expo. Retrieved 15 November 2013. - "Metaio AR SDK". Metaio. Retrieved 15 November 2013. - "Vuforia AR SDK". Vuforia. Retrieved 15 November 2013. - "Wikitude AR SDK". Wikitude. Retrieved 15 November 2013. - "Layar AR SDK". Layar. Retrieved 15 November 2013. - Augmented Reality Landscape 11 August 2012. - Stuart Eve. "Augmenting Phenomenology: Using Augmented Reality to Aid Archaeological Phenomenology in the Landscape". Retrieved 2012-09-25. - Dähne, Patrick; Karigiannis, John N. 
"Archeoguide: System Architecture of a Mobile Outdoor Augmented Reality System". Retrieved 2010-01-06. - Divecha, Devina.Augmented Reality (AR) used in architecture and design. designMENA 8 September 2011. - Architectural dreams in augmented reality. University News, University of Western Australia. 5 March 2012. - Webley, Kayla. The 50 Best Inventions of 2010 – EyeWriter Time, 11 November 2010. - Alexander, Michael.Arbua Shoco Owl Silver Coin with Augmented Reality, Coin Update July 20, 2012. - Royal Mint produces revolutionary commemorative coin for Aruba, Today August 7, 2012. - Humphries, Mathew..Geek.com 19 September 2011. - Netburn, Deborah.Ikea introduces augmented reality app for 2013 catalog. Los Angeles Times, 23 July 2012. - Saenz, Aaron.Virtual Mirror Brings Augmented Reality to Makeup Counters. singularityHub, 15 June 2010. - Katts, Rima. Elizabeth Arden brings new fragrance to life with augmented reality Mobile Marketer, 19 September 2012. - Meyer, David. Telefónica bets on augmented reality with Aurasma tie-in gigaom, 17 September 2012. - Mardle, Pamela.Video becomes reality for Stuprint.com. Printweek, 3 October 2012. - Houlton, Paul. AR in Publishing Publications, 21/01/2014 - RevEye. AR in Print Ads See More with Rev Eye, 3 March 2014 - Churcher, Jason. "Internal accuracy vs external accuracy". Retrieved 7 May 2013. - Lee, Gun (2012). CityViewAR outdoor AR visualization. ACM. p. 97. ISBN 978-1-4503-1474-9. - Groundbreaking Augmented Reality-Based Reading Curriculum Launches, ‘’PRweb’’, 23 October 2011. - Stewart-Smith, Hanna. Education with Augmented Reality: AR textbooks released in Japan, ‘’ZDnet’’, 4 April 2012. - Augmented reality in education smarter learning. - Lubrecht, Anna. Augmented Reality for Education ‘’Digital Union’’, The Ohio State University 24 April 2012. - Maier, Patrick; Tönnis, Marcus; Klinker, Gudron. Augmented Reality for teaching spatial relations, Conference of the International Journal of Arts & Sciences (Toronto 2009). - Vuforia Case Study: Anatomy 4D - Kaufmann, Hannes. Collaborative Augmented Reality in Education, Institute of Software Technology and Interactive Systems, Vienna University of Technology. - Davies, Chris (2012-09-12). "Quantigraphic camera promises HDR eyesight from Father of AR". SlashGear. Retrieved 2012-12-30. - "YOUR THOUGHTS ABOUT AUGMENTED REALITY IN VIDEO GAMES". 2013-05-01. Retrieved 2013-05-07. - Noelle, S. (2002). "Stereo augmentation of simulation results on a projection wall". Mixed and Augmented Reality, 2002. ISMAR 2002. Proceedings.: 271–322. Retrieved 2012-10-07. - Verlinden, Jouke; Horvath, Imre. Augmented Prototyping as Design Means in Industrial Design Engineering. Delft University of Technology. Retrieved 2012-10-07. - Pang, Y; Nee, A; Youcef-Toumie, Kamal; Ong, S.K; Yuan, M.L (November 18, 2004). Assembly Design and Evaluation in an Augmented Reality Environment. National University of Singapore, M.I.T. Retrieved 2012-10-07. - Mountney, Peter; Giannarou, Stamatia ; Elson, Daniel; Yang, Guang-Zhong. Optical Biopsy Mapping for Minimally Invasive Cancer Screening. Department of Computing, Imperial College 2009. - Scopis Augmented Reality: Path guidance to craniopharyngioma on YouTube - "UNC Ultrasound/Medical Augmented Reality Research". Archived from the original on 12 February 2010. Retrieved 2010-01-06. - Cameron, Chris. Military-Grade Augmented Reality Could Redefine Modern Warfare ReadWriteWeb June 11, 2010. 
- Abernathy, M., Houchard, J., Puccetti, M., and Lambert, J,"Debris Correlation Using the Rockwell WorldView System",Proceedings of 1993 Space Surveillance Workshop 30 March to 1 April 1993,pages 189-195 - GM's Enhanced Vision System. Techcrunch.com (17 March 2010). Retrieved 9 June 2012. - Couts, Andrew. New augmented reality system shows 3D GPS navigation through your windshield Digital Trens,27 October 2011. - Griggs, Brandon. Augmented-reality' windshields and the future of driving CNN Tech, 13 January 2012. - Cheney-Peters, Scott (12 April 2012). "CIMSEC: Google's AR Goggles". Retrieved 2012-04-20. - Delgado, F., Abernathy, M., White J., and Lowrey, B. Real-Time 3-D Flight Guidance with Terrain for the X-38,SPIE Enhanced and Synthetic Vision 1999, Orlando Florida, April 1999, Proceedings of the SPIE Vol. 3691, pages 149-156 - Delgado, F., Altman, S., Abernathy, M., White, J. Virtual Cockpit Window for the X-38,SPIE Enhanced and Synthetic Vision 2000, Orlando Florida, Proceedings of the SPIE Vol. 4023, pages 63-70 - Stafford, Aaron; Piekarski, Wayne; Thomas, Bruce H. "Hand of God". Archived from the original on 2009-12-07. Retrieved 2009-12-18. - Benford, S, Greenhalgh, C, Reynard, G, Brown, C and Koleva, B. Understanding and constructing shared spaces with mixed-reality boundaries. ACM Trans. Computer-Human Interaction, 5(3):185–223, Sep. 1998. - Office of Tomorrow Media Interaction Lab. - Marlow, Chris. Hey, hockey puck! NHL PrePlay adds a second-screen experience to live games, digitalmediawire April 27, 2012. - Pair, J.; Wilson, J.; Chastine, J.; Gandy, M. "The Duran Duran Project: The Augmented Reality Toolkit in Live Performance". The First IEEE International Augmented Reality Toolkit Workshop, 2002. - Broughall, Nick. Sydney Band Uses Augmented Reality For Video Clip. Gizmodo, 19 October 2009. - Pendlebury, Ty. Augmented reality in Aussie film clip. c|net 19 October 2009. - Hawkins, Mathew. Augmented Reality Used To Enhance Both Pool And Air Hockey Game Set WatchOctober 15, 2011. - One Week Only – Augmented Reality Project Combat-HELO Dev Blog July 31, 2012. - The big idea:Augmented Reality. Ngm.nationalgeographic.com (15 May 2012). Retrieved 2012-06-09. - Henderson, Steve; Feiner, Steven. "Augmented Reality for Maintenance and Repair (ARMAR)". Retrieved 2010-01-06. - Sandgren, Jeffrey. The Augmented Eye of the Beholder, BrandTech News January 8, 2011. - Cameron, Chris. Augmented Reality for Marketers and Developers, ReadWriteWeb. - Dillow, Clay BMW Augmented Reality Glasses Help Average Joes Make Repairs, Popular Science September 2009. - King, Rachael. Augmented Reality Goes Mobile, Bloomberg Business Week Technology November 3, 2009. - Saenz, Aaron Augmented Reality Does Time Travel Tourism SingularityHUB November 19, 2009. - Sung, Dan Augmented reality in action – travel and tourism Pocket-lint March 2, 2011. - Dawson, Jim Augmented Reality Reveals History to Tourists Life Science August 16, 2009. - Bartie, P and Mackaness, W.Development of a speech-based augmented reality system to support exploration of cityscape. Trans. GIS, 10(1):63–86, 2006. - Benderson, Bejamin B. Audio Augmented Reality: A Prototype Automated Tour Guide Bell Communications Research,, ACM Human Computer in Computing Systems conference, pp. 210–211. - Tsotsis, Alexia. Word Lens Translates Words Inside of Images. Yes Really. TechCrunch (16 December 2010). - N.B. Word Lens: This changes everything The Economist: Gulliver blog 18 December 2010. 
- Borghino, Dario Augmented reality glasses perform real-time language translation. gizmag, 29 July 2012. - "Knowledge-based augmented reality". ACM. July 1993. - "Wearable Computing: A first step towards personal imaging", IEEE Computer, pp. 25–32, Vol. 30, Issue 2, Feb. 1997 link. - L. B. Rosenberg. The Use of Virtual Fixtures As Perceptual Overlays to Enhance Operator Performance in Remote Environments. Technical Report AL-TR-0089, USAF Armstrong Laboratory, Wright-Patterson AFB OH, 1992. - Rosenberg, L., "Virtual fixtures as tools to enhance operator performance in telepresence environments," SPIE Manipulator Technology, 1993. - Rosenberg, "Virtual Haptic Overlays Enhance Performance in Telepresence Tasks," Dept. of Mech. Eng., Stanford Univ., 1994. - Rosenberg, "Virtual Fixtures: Perceptual Overlays Enhance Operator Performance in Telepresence Tasks," Ph.D. Dissertation, Stanford University. - Wagner, Daniel (29 September 2009). "First Steps Towards Handheld Augmented Reality". ACM. Retrieved 2009-09-29. - Piekarski, William; Thomas, Bruce. Tinmith-Metro: New Outdoor Techniques for Creating City Models with an Augmented Reality Wearable Computer Fifth International Symposium on Wearable Computers (ISWC'01), 2001, pp. 31. - Behringer, R.;Improving the Registration Precision by Visual Horizon Silhouette Matching. Rockwell Science Center. - Behringer, R.;Tam, C; McGee, J.; Sundareswaran, V.; Vassiliou, Marius. Two Wearable Testbeds for Augmented Reality: itWARNS and WIMMIS. ISWC 2000, Atlanta, 16–17 October 2000. - R. Behringer, G. Klinker,. D. Mizell. Augmented Reality – Placing Artificial Objects in Real Scenes. Proceedings of IWAR '98. A.K.Peters, Natick, 1999. ISBN 1-56881-098-9. - Johnson, Joel. “The Master Key”: L. Frank Baum envisions augmented reality glasses in 1901 Mote & Beam 10 September 2012. - Mann, Steve (2012-11-02). "Eye Am a Camera: Surveillance and Sousveillance in the Glassage". Techland.time.com. Retrieved 2013-10-14. - Lee, Kangdon (March 2012). "Augmented Reality in Education and Training". Techtrends: Linking Research & Practice To Improve Learning 56 (2). Retrieved 2014-05-15. - L. B. Rosenberg, "The Use of Virtual Fixtures to Enhance Operator Performance in Telepresence Environments" SPIE Telemanipulator Technology, 1993. - Wellner, Pierre. "Computer Augmented Environments: back to the real world". ACM. Retrieved 2012-07-28. - Barrilleaux, Jon. Experiences and Observations in Applying Augmented Reality to Live Training. Jmbaai.com. Retrieved 2012-06-09. - AviationNow.com Staff, "X-38 Test Features Use Of Hybrid Synthetic Vision" AviationNow.com, December 11, 2001 - Wikitude AR Travel Guide. Youtube.com. Retrieved 2012-06-09. - Cameron, Chris. Flash-based AR Gets High-Quality Markerless Upgrade, ReadWriteWeb 9 July 2010. - Miller, Claire. , New York Times 20 February 2013. - Augmented Reality & Virtual Reality Market worth 1.06 Billion by 2018 – Market Research Media related to Augmented reality at Wikimedia Commons
Viruses reproduce only inside the living cells of organisms, (Wu 2020) and more than 6,000 virus species have been described so far. (International Committee on Taxonomy of Viruses (ICTV) 2020) When a virus infects a cell, it forces the cell to rapidly produce thousands of identical copies of the original virus. A relatively common misconception about what a biological virus actually is arises because the word "virus" is often used only for the protective capsules made up of proteins that carry the viral genomic information in the extracellular environment. (Jacob and Wollman 1961) Such a particle is a virion and is generally considered dead. Matti Jalasvuori (Jalasvuori 2012) highlights the difference between a virus and a virion, which allows us to appreciate viruses as evolutionary players or even as living organisms. (Forterre and Prangishvili 2009) Virions are external, autonomous particles consisting of genetic material (DNA or RNA molecules) that encodes the structure of proteins, a protein layer (the capsid), and sometimes an outer layer of lipids.

Viruses are far too small to be visualized with a regular microscope, with diameters between 20 and 300 nanometers. (Mahy 1998) The first images of them were obtained by electron microscopy in 1931 by the German engineers Ernst Ruska and Max Knoll. (Fraengsmyr and Ekspong 1993) Rosalind Franklin determined the complete structure of a virus (the tobacco mosaic virus) in 1955. (Creager and Morgan 2008)

Viruses appear to have played a role in events such as the origin of cellular life (Koonin, Senkevich, and Dolja 2006) and the evolution of mammals. (Gifford 2012) The origin of viruses is unclear; they may have existed since the first living cells evolved. (Iyer et al. 2006) There are three main hypotheses that explain the origin of viruses: (Shors 2016)

- The regressive hypothesis ('degeneration hypothesis' (Dimmock, Easton, and Leppard 2007) or 'reduction hypothesis' (Mahy and Regenmortel 2009)): viruses come from small cells that previously parasitized larger cells.
- The cellular origin hypothesis ('wandering hypothesis' (Mahy 1998) or 'escape hypothesis' (B. W. J. Mahy and Regenmortel 2009)): viruses come from bits of DNA or RNA that have 'escaped' from the genes of a larger organism. (Shors 2016)
- The co-evolution hypothesis ('virus-first hypothesis') (B. W. J. Mahy and Regenmortel 2009): viruses come from complex molecules of proteins and nucleic acid that appeared at the same time as the cells on which they would have been dependent.

Another hypothesis holds that viruses have probably appeared several times in the past, through one or more of these mechanisms. (B. W. J. Mahy and Regenmortel 2009)

Even the simplest bacterium is far too complex to have appeared spontaneously at the beginning of evolution; only subsequently was evolution able to produce increasingly complex systems. Matti Jalasvuori concludes that the first true cell must already have been a product of evolution, (Jalasvuori 2012) resulting from a primordial community. (Doolittle 2000) This community evolved mainly horizontally, by exchanging genetic information between protocells, rather than in a 'Darwinian' way, passing genes vertically to offspring. (Koonin and Martin 2005) It follows that the protocells themselves were not coherent genetic entities, but more or less random collections of independent genetic replicators, which evolved collectively and thus maintained the common genetic code.
(Vetsigian, Woese, and Goldenfeld 2006) Since viruses or virus-like replicators are thought to be able to come up with new genes, they could have been one of the elements in that primordial community. Matti Jalasvuori states that viruses provide a possible explanation for the horizontal evolution of early life, as virions are essentially genetically encoded structures that mediate the cell-to-cell transfer of genetic information. As the primordial system advanced, some of the first viruses established a permanent residence in some of the protocells. (Jalasvuori 2012)

Scott Podolsky (Podolsky 1996) described the different roles of viruses in theorizing about the origin of life from the 1920s to the 1960s. (Kostyrka 2016) He noted that viruses were integrated into life-origin scenarios characterized by a "nucleocentric approach", as opposed to a "cytoplasmic approach". The nucleocentric approach defined life based on self-duplication; (Podolsky 1996, 80) the cytoplasmic approach focused on the cytoplasm as a model to define life and understand its origin, conceived as self-regulation. Podolsky identified three major roles of viruses in early life-origin scenarios: (1) as a "metaphor" of life (conceptualized as an image of primitive life); (2) as an "operational model" (providing, by analogy, a conceptual representation of possible mechanisms); and (3) a phylogenetic role, conferring on virus-centered nucleocentric arguments a real "sense of historicity". (Podolsky 1996, 84) In this last role, viruses could be seen as the "relatively unmodified descendants of the primordial precursor to all later life forms."

According to Gladys Kostyrka in What roles for viruses in origin of life scenarios? (Kostyrka 2016), the conceptualization of viruses as inert products of living cells or as extracellular agents had strong implications for the roles that viruses could play in life-origin scenarios. The divergence between an "endogenous thought style" and an "exogenous thought style" has been particularly strong in the debates.

Felix d'Herelle proposed a virocentric scenario of the origin of life. (Félix d' Hérelle 1926) For d'Herelle, viruses are not primitive life forms, (F. d'Hérelle 1928, 540) because they are parasites of cells. But viruses could represent relatively unchanged descendants of primitive life forms (a phylogenetic role), and could also serve as a metaphor for life (a metaphorical role). (F. d'Hérelle 1928, 538) Based on a viral metaphor of life, d'Herelle hypothesized that the simplest forms of life are not cellular but micellar. The scenario proposed by Alexander and Bridges in 1928 differs in many respects from d'Herelle's. (Alexander and Bridges 1928) Their approach is nucleocentric, because they conceive of the virus as an example of life. They consider viruses simple life forms ("ultrabionts"), though more complex than the most fundamental ones ("moleculobionts"). J. B. S. Haldane provided another conception of life which, like d'Herelle's, is not strictly nucleocentric but nevertheless gives viruses important roles in the origins of life. Haldane, however, refused to call viruses "living" and instead described them as models for understanding the first "half-living molecules" (Haldane 1929) that might have existed before the formation of the first cell.

The phylogenetic roles of viruses have been particularly contested. On the opposing view, viruses would rather be the result of the reductive evolution of cells.
(Laidlaw 2014) The Green-Laidlaw hypothesis, or retrograde hypothesis, for the origin of viruses has convinced many biologists. (Podolsky 1996, 101–3) Hypotheses attributing a role in the origin of life to viruses nevertheless multiplied during the years 2000-2010. (Koonin and Dolja 2013) According to Gladys Kostyrka, the following syllogism would probably be accepted by many biologists: (Kostyrka 2016)

- Viruses depend on cells (no virus could have existed before cells);
- the search for the origin of life amounts to tracing the appearance of the first cell;
- viruses are therefore excluded from discussions about the origins of life.

This syllogism seems to rule out any phylogenetic or historical role of viruses in the origins of life. However, Patrick Forterre hypothesized that viruses appeared before DNA cells and before LUCA (the Last Universal Cellular Ancestor), (Forterre 2006) which does give viruses a phylogenetic role. According to Forterre, ancestral viruses did not contribute to the emergence of cellular life; cellular life must have existed before, because viruses need cells to replicate. But viruses are said to have contributed to the origin of DNA cells. In a simplified version of this scenario for the appearance of DNA, RNA viruses appeared within the second age of the RNA world, because RNA cells already existed and could be parasitized. (Kostyrka 2016) (Forterre 2005) Confirmation of the phylogenetic role of viruses could therefore explain the problematic coexistence of two distinct ways of replicating DNA in the living world. This scenario also gives viruses an operational role. Viruses, for Forterre, have phylogenetic and operational roles, but they are not metaphors of primitive life. (Forterre 2016)

Eugene Koonin develops a virocentric scenario for the origin of life (the "primordial virus world scenario" (Koonin 2009)). Koonin also assumes that viruses appeared during the second age of the RNA world, but rejects the alleged existence of RNA cells, mainly because of RNA instability. (Koonin, Senkevich, and Dolja 2006, 10) He argued that the first cells must have been DNA cells, so viruses must have appeared in a world without cells. Thus, Koonin rejects the common assumption that viruses cannot exist without cells. (Koonin and Dolja 2013, 550) In 2006, Koonin formulated the "ancient viral world hypothesis": no gene is shared by all virus species, so there is no common ancestor of all viruses, and viruses have multiple origins. To explain the presence of genes shared among many existing viruses, Koonin assumes that they came from a primordial viral world and were conserved. (Koonin 2009, 60) Koonin argues that the phylogenetic role of making the switch from RNA to DNA possible is not attributed to viruses alone; (Koonin and Dolja 2014, 289) he attributes a phylogenetic role to all components of viruses. To some extent, this hypothesis also provides a metaphor for life. (Koonin and Martin 2005) The originality of Koonin's virocentric scenario lies in its underlying conception of viruses. Unlike Forterre, Koonin argues that viruses can exist and replicate without cells; thus, Koonin also challenges premise 1 of the syllogism. Moreover, the viral world "is by no means limited to the typical viruses that encode capsid".
(Koonin and Dolja 2013) Gladys Kostyrka concludes that Forterre and Koonin both argue for possible analogies between the actual pathways of viral replication and those that may have existed in the early stages of life, and that viruses played an important phylogenetic role in the appearance of DNA and, more generally, in the evolution of replication mechanisms. But Forterre claims that viruses could only exist if there were cells, because viruses are intracellular parasites; the phylogenetic role of viruses would thus have taken place after the appearance of cellular life. On the contrary, Koonin's conception of viruses contradicts the definition of viruses as intracellular parasites: for Koonin, viruses are fundamentally selfish genetic elements surrounded by a capsid. (Kostyrka 2016) How could viruses play a role in the appearance of life if the existence of cells is a precondition for the existence of viruses? Gladys Kostyrka proposes several strategies. A first important strategy for introducing viruses into life-origin scenarios is to define life as acellular. A very different strategy is based on redefining cellular life. (Kostyrka 2016)

There are six basic steps in virus replication: (Mahy 1998)

- Attachment: binding between viral capsid proteins and specific receptors on the host cell surface. (Más and Melero 2013)
- Penetration: virions enter the host cell through receptor-mediated endocytosis or membrane fusion. (Dimmock, Easton, and Leppard 2007)
- Uncoating: removal of the viral capsid. (Blaas 2016)
- Replication: multiplication of the genome. (Isomura and Stinski 2013)
- Assembly: self-assembly of the virus particles; a modification of the proteins (maturation) often occurs after the virus has been released from the host cell. (Barman et al. 2001)
- Release: by lysis, a process that usually kills the cell by breaking the membrane and the cell wall. (Dimmock, Easton, and Leppard 2007)

Viruses facilitate horizontal gene transfer, increasing genetic diversity. (Canchaya et al. 2003) There is an ongoing debate as to the extent to which viruses are a life form, or are "living organisms" (Rybicki 1990) and self-replicators. (Koonin and Starokadomskyy 2016) Viruses undergo genetic changes through several mechanisms. In antigenic drift, individual bases in the DNA or RNA mutate to other bases; these point changes can confer evolutionary benefits, such as resistance to antiviral drugs. (Sandbulte et al. 2011) Antigenic shift, a major change in the virus genome that may result from recombination or reassortment, can cause pandemics when it occurs in influenza viruses. (Hampson and Mackenzie 2006) RNA viruses often exist as quasispecies, or swarms of viruses of the same species but with slightly different nucleotide sequences of the genome. Such quasispecies are a major target for natural selection. (Metzner 2006) In genetic recombination, a DNA molecule is broken and then joined to the end of a different DNA molecule; recombination usually occurs when viruses infect a cell simultaneously. (Worobey and Holmes 1999)

Many organisms harbor a variety of genes unknown to science, (Mocali and Benedetti 2010) and many of these new genes are found in viral genomes. (Yin and Fischer 2008) Viruses could thus be considered genetic modifiers. According to Moreira and López-García, viruses themselves do not evolve but are evolved by cells. (Moreira and Lopez-Garcia 2009) Yet many viral genes do not appear to have cellular counterparts, (Yin and Fischer 2008) and viruses appear to have genes that produce structurally and functionally conserved proteins with no apparent cellular ancestors.
(Keller et al. 2009) Viral infections usually cause an immune response that eliminates the virus. Such immune responses can also be triggered by specific vaccines. Some viruses, such as those that cause AIDS and viral hepatitis, manage to evade these immune responses and cause chronic infections. Some viruses cause no apparent changes in the infected cell (latency), (Sinclair 2008) a feature of herpes viruses. (Whitley and Roizman 2001) These latent viruses can sometimes be beneficial, increasing immunity against bacterial pathogens. (Barton et al. 2007) Other infections persist throughout life, (Bertoletti and Gehring 2007) and infected people are known as carriers because they serve as reservoirs of infectious virus. (Rodrigues et al. 2001)

Virus transmission can be vertical (e.g., from mother to child) or horizontal (from person to person). Horizontal transmission is the most common mechanism of virus spread. (Antonovics et al. 2017) Epidemiology is used to break the chain of infection in populations during outbreaks of viral diseases, (Shors 2016) by identifying the source of the outbreak and the virus involved. The chain can be interrupted through vaccination, isolation (quarantine), sanitation and disinfection. Vaccines can consist of attenuated viruses or of viral proteins (antigens). (Palese 2006)

Matti Jalasvuori points out that, although viral infections can make the host resistant to subsequent infections by similar types of viruses, this is not a hereditary symbiosis: we are immune to chickenpox after an infection, but our children still have to be infected themselves to become resistant. (Jalasvuori 2012) During the spread of a virus epidemic, however, the integration of a virus into germ cells could provide an advantage to its carrier. (Jern and Coffin 2008) It is possible for a virus to establish a mutually beneficial relationship with its host. This symbiotic partnership would exist mainly at the level of genetic information, (Ryan 2009) but can still arise through a fusion of two distinct entities of genetic reproduction. Although viruses could be considered to form symbiotic relationships through any such mechanism, Matti Jalasvuori highlights an interesting question: how do such integrated viruses affect the subsequent evolution of their hosts? An endogenous virus alters the genetic composition of the chromosomes and can, for example, regulate the expression of host genes. (Jern and Coffin 2008) Some virus-derived genes appear to have remained active for tens of millions of years. (Katzourakis and Gifford 2010) But even then, it is difficult to say with certainty how important these viruses were in the evolution of their hosts. (Jalasvuori 2012)

Viruses are an important natural means of gene transfer between different species, increasing genetic diversity and driving evolution, (Canchaya et al. 2003) and they are considered one of the largest reservoirs of unexplored genetic diversity on Earth. (Suttle 2007) They can also be used to manipulate and investigate cell functions, (Mahy 1998) serving as vectors to introduce genes into the cells being studied. Virotherapy uses viruses as vectors to treat various diseases, including in cancer treatment and gene therapy. (Jefferson, Cadet, and Hielscher 2015) Many viruses can be synthesized "from scratch"; the first synthetic virus was created in 2002. (Cello, Paul, and Wimmer 2002) This technology is being used to investigate new vaccination strategies. (Coleman et al.
2008) It follows that viruses can no longer be considered extinct, as long as their genome sequence information is known and permissive cells are available. The ability of viruses to cause epidemics has raised concerns about the possibility of their use in a biological warfare. The 1918 influenza virus was recently successfully recreated in a laboratory. (Zilinskas 2017) There are only two centers in the world authorized by the WHO to store smallpox virus stocks, which can be used as a weapon because the smallpox vaccine has sometimes had severe side effects, and is no longer commonly used in any country. (Artenstein and Grabenstein 2008) - Alexander, J., and C. B. Bridges. 1928. “Some Physico-Chemical Aspects of Life, Mutation, and Evolution.” Biology and Medicine II: 9–58. - Antonovics, Janis, Anthony J. Wilson, Mark R. Forbes, Heidi C. Hauffe, Eva R. Kallio, Helen C. Leggett, Ben Longdon, Beth Okamura, Steven M. Sait, and Joanne P. Webster. 2017. “The Evolution of Transmission Mode.” Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 372 (1719). https://doi.org/10.1098/rstb.2016.0083. - Artenstein, Andrew W., and John D. Grabenstein. 2008. “Smallpox Vaccines for Biodefense: Need and Feasibility.” Expert Review of Vaccines 7 (8): 1225–37. https://doi.org/10.1586/147605184.108.40.2065. - Barman, Subrata, Ayub Ali, Eric K. -W. Hui, Lopa Adhikary, and Debi P. Nayak. 2001. “Transport of Viral Proteins to the Apical Membranes and Interaction of Matrix Protein with Glycoproteins in the Assembly of Influenza Viruses.” Virus Research 77 (1): 61–69. https://doi.org/10.1016/S0168-1702(01)00266-0. - Barton, Erik S., Douglas W. White, Jason S. Cathelyn, Kelly A. Brett-McClellan, Michael Engle, Michael S. Diamond, Virginia L. Miller, and Herbert W. Virgin. 2007. “Herpesvirus Latency Confers Symbiotic Protection from Bacterial Infection.” Nature 447 (7142): 326–29. https://doi.org/10.1038/nature05762. - Bertoletti, Antonio, and Adam Gehring. 2007. “Immune Response and Tolerance during Chronic Hepatitis B Virus Infection.” Hepatology Research 37 (s3): S331–38. https://doi.org/10.1111/j.1872-034X.2007.00221.x. - Blaas, Dieter. 2016. “Viral Entry Pathways: The Example of Common Cold Viruses.” Wiener Medizinische Wochenschrift 166 (7): 211–26. https://doi.org/10.1007/s10354-016-0461-2. - Canchaya, Carlos, Ghislain Fournous, Sandra Chibani-Chennoufi, Marie-Lise Dillmann, and Harald Brüssow. 2003. “Phage as Agents of Lateral Gene Transfer.” Current Opinion in Microbiology 6 (4): 417–24. https://doi.org/10.1016/S1369-5274(03)00086-9. - Cello, Jeronimo, Aniko V. Paul, and Eckard Wimmer. 2002. “Chemical Synthesis of Poliovirus CDNA: Generation of Infectious Virus in the Absence of Natural Template.” Science (New York, N.Y.) 297 (5583): 1016–18. https://doi.org/10.1126/science.1072266. - Coleman, J. Robert, Dimitris Papamichail, Steven Skiena, Bruce Futcher, Eckard Wimmer, and Steffen Mueller. 2008. “Virus Attenuation by Genome-Scale Changes in Codon Pair Bias.” Science (New York, N.Y.) 320 (5884): 1784–87. https://doi.org/10.1126/science.1155761. - Creager, Angela N. H., and Gregory J. Morgan. 2008. “After the Double Helix.” Isis 99 (2): 239–72. https://doi.org/10.1086/588626. - Dimmock, Nigel J., Andrew J. Easton, and Keith N. Leppard. 2007. Introduction to Modern Virology. 6th Edition. Malden, MA: Wiley-Blackwell. - Doolittle, W. F. 2000. “The Nature of the Universal Ancestor and the Evolution of the Proteome.” Current Opinion in Structural Biology. 
Hellenistic period

The Hellenistic period covers the period of Mediterranean history between the death of Alexander the Great in 323 BC and the emergence of the Roman Empire as signified by the Battle of Actium in 31 BC and the subsequent conquest of Ptolemaic Egypt the following year. The Ancient Greek word Hellas (Ἑλλάς, Ellás) is the original word for Greece, from which the word "Hellenistic" was derived. At this time, Greek cultural influence and power were at their peak in Europe, North Africa and Western Asia, experiencing prosperity and progress in the arts, exploration, literature, theatre, architecture, music, mathematics, philosophy, and science. It is often considered a period of transition, sometimes even of decadence or degeneration, compared to the enlightenment of the Greek Classical era. The Hellenistic period saw the rise of New Comedy, Alexandrian poetry, the Septuagint and the philosophies of Stoicism and Epicureanism. Greek science was advanced by the works of the mathematician Euclid and the polymath Archimedes. The religious sphere expanded to include new gods such as the Greco-Egyptian Serapis, eastern deities such as Attis and Cybele and the Greek adoption of Buddhism.

After Alexander the Great's invasion of the Persian Empire in 330 BC and its disintegration shortly after, the Hellenistic kingdoms were established throughout south-west Asia (Seleucid Empire, Kingdom of Pergamon), north-east Africa (Ptolemaic Kingdom) and South Asia (Greco-Bactrian Kingdom, Indo-Greek Kingdom). The Hellenistic period was characterized by a new wave of Greek colonization which established Greek cities and kingdoms in Asia and Africa. This resulted in the export of Greek culture and language to these new realms, spanning as far as modern-day India. Equally, however, these new kingdoms were influenced by the indigenous cultures, adopting local practices where beneficial, necessary, or convenient. Hellenistic culture thus represents a fusion of the Ancient Greek world with that of the Near East, Middle East, and Southwest Asia. This mixture gave rise to a common Attic-based Greek dialect, known as Koine Greek, which became the lingua franca throughout the Hellenistic world.

Scholars and historians are divided as to what event signals the end of the Hellenistic era. The Hellenistic period may be seen to end either with the final conquest of the Greek heartlands by Rome in 146 BC following the Achaean War, with the final defeat of the Ptolemaic Kingdom at the Battle of Actium in 31 BC, or even the move by Roman emperor Constantine the Great of the capital of the Roman Empire to Constantinople in 330 AD.

"Hellenistic" is distinguished from "Hellenic" in that the former encompasses the entire sphere of direct ancient Greek influence, while the latter refers to Greece itself. The word originated from the German term hellenistisch, from Ancient Greek Ἑλληνιστής (Hellēnistḗs, "one who uses the Greek language"), from Ἑλλάς (Hellás, "Greece"); as if "Hellenist" + "ic". "Hellenistic" is a modern word and a 19th-century concept; the idea of a Hellenistic period did not exist in Ancient Greece. Although words related in form or meaning, e.g.
Hellenist (Ancient Greek: Ἑλληνιστής, Hellēnistēs), have been attested since ancient times, it was Johann Gustav Droysen in the mid-19th century, who in his classic work Geschichte des Hellenismus (History of Hellenism), coined the term Hellenistic to refer to and define the period when Greek culture spread in the non-Greek world after Alexander's conquest. Following Droysen, Hellenistic and related terms, e.g. Hellenism, have been widely used in various contexts; a notable such use is in Culture and Anarchy by Matthew Arnold, where Hellenism is used in contrast with Hebraism. The major issue with the term Hellenistic lies in its convenience, as the spread of Greek culture was not the generalized phenomenon that the term implies. Some areas of the conquered world were more affected by Greek influences than others. The term Hellenistic also implies that the Greek populations were in the majority in the areas in which they settled, but in many cases, the Greek settlers were actually the minority among the native populations. The Greek population and the native population did not always mix; the Greeks moved and brought their own culture, but interaction did not always occur.

While a few fragments exist, there is no complete surviving historical work which dates to the hundred years following Alexander's death. The works of the major Hellenistic historians Hieronymus of Cardia (who worked under Alexander, Antigonus I and other successors), Duris of Samos and Phylarchus which were used by surviving sources are all lost. The earliest and most credible surviving source for the Hellenistic period is Polybius of Megalopolis (c. 200–118 BC), a statesman of the Achaean League until 168 BC when he was forced to go to Rome as a hostage. His Histories eventually grew to a length of forty books, covering the years 220 to 167 BC. The most important source after Polybius is Diodorus Siculus, who wrote his Bibliotheca historica between 60 and 30 BC and reproduced some important earlier sources such as Hieronymus, but his account of the Hellenistic period breaks off after the battle of Ipsus (301 BC). Another important source, Plutarch's (c. 50–c. 120) Parallel Lives, although more preoccupied with issues of personal character and morality, outlines the history of important Hellenistic figures. Appian of Alexandria (late 1st century AD–before 165) wrote a history of the Roman empire that includes information on some Hellenistic kingdoms. Other sources include Justin's (2nd century AD) epitome of Pompeius Trogus' Historiae Philippicae and a summary of Arrian's Events after Alexander, by Photios I of Constantinople. Lesser supplementary sources include Curtius Rufus, Pausanias, Pliny, and the Byzantine encyclopedia the Suda. In the field of philosophy, Diogenes Laertius' Lives and Opinions of Eminent Philosophers is the main source; works such as Cicero's De Natura Deorum also provide some further detail of philosophical schools in the Hellenistic period.

Ancient Greece had traditionally been a fractious collection of fiercely independent city-states. After the Peloponnesian War (431–404 BC), Greece had fallen under a Spartan hegemony, in which Sparta was pre-eminent but not all-powerful. Spartan hegemony was succeeded by a Theban one after the Battle of Leuctra (371 BC), but after the Battle of Mantinea (362 BC), all of Greece was so weakened that no one state could claim pre-eminence. It was against this backdrop that the ascendancy of Macedon began, under king Philip II.
Macedon was located at the periphery of the Greek world, and although its royal family claimed Greek descent, the Macedonians themselves were looked down upon as semi-barbaric by the rest of the Greeks. However, Macedon had a relatively strong and centralised government, and compared to most Greek states, directly controlled a large area. Philip II was a strong and expansionist king and he took every opportunity to expand Macedonian territory. In 352 BC he annexed Thessaly and Magnesia. In 338 BC, Philip defeated a combined Theban and Athenian army at the Battle of Chaeronea after a decade of desultory conflict. In the aftermath, Philip formed the League of Corinth, effectively bringing the majority of Greece under his direct sway. He was elected Hegemon of the league, and a campaign against the Achaemenid Empire of Persia was planned. However, while this campaign was in its early stages, he was assassinated. Succeeding his father, Alexander took over the Persian war himself. During a decade of campaigning, Alexander conquered the whole Persian Empire, overthrowing the Persian king Darius III. The conquered lands included Asia Minor, Assyria, the Levant, Egypt, Mesopotamia, Media, Persia, and parts of modern-day Afghanistan, Pakistan, and the steppes of central Asia. The years of constant campaigning had taken their toll however, and Alexander died in 323 BC. After his death, the huge territories Alexander had conquered became subject to a strong Greek influence (Hellenization) for the next two or three centuries, until the rise of Rome in the west, and of Parthia in the east. As the Greek and Levantine cultures mingled, the development of a hybrid Hellenistic culture began, and persisted even when isolated from the main centres of Greek culture (for instance, in the Greco-Bactrian kingdom). It can be argued that some of the changes across the Macedonian Empire after Alexander's conquests and during the rule of the Diadochi would have occurred without the influence of Greek rule. As mentioned by Peter Green, numerous factors of conquest have been merged under the term Hellenistic Period. Specific areas conquered by Alexander's invading army, including Egypt and areas of Asia Minor and Mesopotamia "fell" willingly to conquest and viewed Alexander as more of a liberator than a conqueror. In addition, much of the area conquered would continue to be ruled by the Diadochi, Alexander's generals and successors. Initially the whole empire was divided among them; however, some territories were lost relatively quickly, or only remained nominally under Macedonian rule. After 200 years, only much reduced and rather degenerate states remained, until the conquest of Ptolemaic Egypt by Rome. When Alexander the Great died (10 June 323 BC), he left behind a huge empire which was composed of many essentially autonomous territories called satrapies. Without a chosen successor there were immediate disputes among his generals as to who should be king of Macedon. These generals became known as the Diadochi (Greek: Διάδοχοι, Diadokhoi, meaning "Successors"). Meleager and the infantry supported the candidacy of Alexander's half-brother, Philip Arrhidaeus, while Perdiccas, the leading cavalry commander, supported waiting until the birth of Alexander's child by Roxana. After the infantry stormed the palace of Babylon, a compromise was arranged – Arrhidaeus (as Philip III) should become king, and should rule jointly with Roxana's child, assuming that it was a boy (as it was, becoming Alexander IV). 
Perdiccas himself would become regent (epimeletes) of the empire, and Meleager his lieutenant. Soon, however, Perdiccas had Meleager and the other infantry leaders murdered, and assumed full control. The generals who had supported Perdiccas were rewarded in the partition of Babylon by becoming satraps of the various parts of the empire, but Perdiccas' position was shaky, because, as Arrian writes, "everyone was suspicious of him, and he of them".

The first of the Diadochi wars broke out when Perdiccas planned to marry Alexander's sister Cleopatra and began to question Antigonus I Monophthalmus' leadership in Asia Minor. Antigonus fled to Greece, and then, together with Antipater and Craterus (the satrap of Cilicia who had been in Greece fighting the Lamian War), invaded Anatolia. The rebels were supported by Lysimachus, the satrap of Thrace, and Ptolemy, the satrap of Egypt. Although Eumenes, satrap of Cappadocia, defeated the rebels in Asia Minor, Perdiccas himself was murdered by his own generals Peithon, Seleucus, and Antigenes (possibly with Ptolemy's aid) during his invasion of Egypt (c. 21 May to 19 June, 320 BC). Ptolemy came to terms with Perdiccas's murderers, making Peithon and Arrhidaeus regents in his place, but soon these came to a new agreement with Antipater at the Treaty of Triparadisus. Antipater was made regent of the Empire, and the two kings were moved to Macedon. Antigonus remained in charge of Asia Minor, Ptolemy retained Egypt, Lysimachus retained Thrace and Seleucus I controlled Babylon.

The second Diadochi war began following the death of Antipater in 319 BC. Passing over his own son, Cassander, Antipater had declared Polyperchon his successor as Regent. Cassander rose in revolt against Polyperchon (who was joined by Eumenes) and was supported by Antigonus, Lysimachus and Ptolemy. In 317 BC, Cassander invaded Macedonia, attaining control of Macedon, sentencing Olympias to death and capturing the boy king Alexander IV and his mother. In Asia, Eumenes was betrayed by his own men after years of campaign and was given up to Antigonus who had him executed.

The third war of the Diadochi broke out because of the growing power and ambition of Antigonus. He began removing and appointing satraps as if he were king and also raided the royal treasuries in Ecbatana, Persepolis and Susa, making off with 25,000 talents. Seleucus was forced to flee to Egypt and Antigonus was soon at war with Ptolemy, Lysimachus, and Cassander. He then invaded Phoenicia, laid siege to Tyre, stormed Gaza and began building a fleet. Ptolemy invaded Syria and defeated Antigonus' son, Demetrius Poliorcetes, in the Battle of Gaza of 312 BC, which allowed Seleucus to secure control of Babylonia and the eastern satrapies. In 310 BC, Cassander had young King Alexander IV and his mother Roxane murdered, ending the Argead Dynasty which had ruled Macedon for several centuries. Antigonus then sent his son Demetrius to regain control of Greece. In 307 BC he took Athens, expelling Demetrius of Phaleron, Cassander's governor, and proclaiming the city free again. Demetrius now turned his attention to Ptolemy, defeating his fleet at the Battle of Salamis and taking control of Cyprus. In the aftermath of this victory, Antigonus took the title of king (basileus) and bestowed it on his son Demetrius Poliorcetes; the rest of the Diadochi soon followed suit. Demetrius continued his campaigns by laying siege to Rhodes and conquering most of Greece in 302 BC, creating a league against Cassander's Macedon.
The decisive engagement of the war came when Lysimachus invaded and overran much of western Anatolia, but was soon isolated by Antigonus and Demetrius near Ipsus in Phrygia. Seleucus arrived in time to save Lysimachus and utterly crushed Antigonus at the Battle of Ipsus in 301 BC. Seleucus' war elephants proved decisive, Antigonus was killed, and Demetrius fled back to Greece to attempt to preserve the remnants of his rule there by recapturing a rebellious Athens. Meanwhile, Lysimachus took over Ionia, Seleucus took Cilicia, and Ptolemy captured Cyprus. After Cassander's death in 298 BC, however, Demetrius, who still maintained a sizable loyal army and fleet, invaded Macedon, seized the Macedonian throne (294 BC) and conquered Thessaly and most of central Greece (293–291 BC). He was defeated in 288 BC when Lysimachus of Thrace and Pyrrhus of Epirus invaded Macedon on two fronts, and quickly carved up the kingdom for themselves. Demetrius fled to central Greece with his mercenaries and began to build support there and in the northern Peloponnese. He once again laid siege to Athens after they turned on him, but then struck a treaty with the Athenians and Ptolemy, which allowed him to cross over to Asia Minor and wage war on Lysimachus' holdings in Ionia, leaving his son Antigonus Gonatas in Greece. After initial successes, he was forced to surrender to Seleucus in 285 BC and later died in captivity. Lysimachus, who had seized Macedon and Thessaly for himself, was forced into war when Seleucus invaded his territories in Asia Minor and was defeated and killed in 281 BC at the Battle of Corupedium, near Sardis. Seleucus then attempted to conquer Lysimachus' European territories in Thrace and Macedon, but he was assassinated by Ptolemy Ceraunus ("the thunderbolt"), who had taken refuge at the Seleucid court and then had himself acclaimed as king of Macedon. Ptolemy was killed when Macedon was invaded by Gauls in 279 BC (his head was stuck on a spear) and the country fell into anarchy. Antigonus II Gonatas invaded Thrace in the summer of 277 BC and defeated a large force of 18,000 Gauls. He was quickly hailed as king of Macedon and went on to rule for 35 years. At this point the tripartite territorial division of the Hellenistic age was in place, with the main Hellenistic powers being Macedon under Demetrius's son Antigonus II Gonatas, the Ptolemaic kingdom under the aged Ptolemy I and the Seleucid empire under Seleucus' son Antiochus I Soter.

Kingdom of Epirus

In 281 BC Pyrrhus (nicknamed "the eagle", aetos) invaded southern Italy to aid the city state of Tarentum. Pyrrhus defeated the Romans in the Battle of Heraclea and at the Battle of Asculum. Though victorious, he was forced to retreat due to heavy losses, hence the term "Pyrrhic victory". Pyrrhus then turned south and invaded Sicily but was unsuccessful and returned to Italy. After the Battle of Beneventum (275 BC) Pyrrhus lost all his Italian holdings and left for Epirus. Pyrrhus then went to war with Macedonia in 275 BC, deposing Antigonus II Gonatas and briefly ruling over Macedonia and Thessaly until 272 BC. Afterwards he invaded southern Greece, and was killed in battle at Argos in 272 BC. After the death of Pyrrhus, Epirus remained a minor power. In 233 BC the Aeacid royal family was deposed and a federal state was set up called the Epirote League. The league was conquered by Rome in the Third Macedonian War (171–168 BC).
Kingdom of Macedon

Antigonus II, a student of Zeno of Citium, spent most of his rule defending Macedon against Epirus and cementing Macedonian power in Greece, first against the Athenians in the Chremonidean War, and then against the Achaean League of Aratus of Sicyon. Under the Antigonids, Macedonia was often short on funds, the Pangaeum mines were no longer as productive as under Philip II, the wealth from Alexander's campaigns had been used up and the countryside pillaged by the Gallic invasion. A large number of the Macedonian population had also been resettled abroad by Alexander or had chosen to emigrate to the new eastern Greek cities. Up to two thirds of the population emigrated, and the Macedonian army could only count on a levy of 25,000 men, a significantly smaller force than under Philip II. Antigonus II ruled until his death in 239 BC. His son Demetrius II soon died in 229 BC, leaving a child (Philip V) as king, with the general Antigonus Doson as regent. Doson led Macedon to victory in the war against the Spartan king Cleomenes III, and occupied Sparta. Philip V, who came to power when Doson died in 221 BC, was the last Macedonian ruler with both the talent and the opportunity to unite Greece and preserve its independence against the "cloud rising in the west": the ever-increasing power of Rome. He was known as "the darling of Hellas". Under his auspices the Peace of Naupactus (217 BC) brought the latest war between Macedon and the Greek leagues (the Social War, 220–217 BC) to an end, and at this time he controlled all of Greece except Athens, Rhodes and Pergamum. In 215 BC Philip, with his eye on Illyria, formed an alliance with Rome's enemy Hannibal of Carthage, which led to Roman alliances with the Achaean League, Rhodes and Pergamum. The First Macedonian War broke out in 212 BC, and ended inconclusively in 205 BC. Philip continued to wage war against Pergamum and Rhodes for control of the Aegean (204–200 BC) and ignored Roman demands for non-intervention in Greece by invading Attica. In 197 BC, during the Second Macedonian War, Philip was decisively defeated at Cynoscephalae by the Roman proconsul Titus Quinctius Flamininus and Macedon lost all its territories in Greece proper. Southern Greece was now thoroughly brought into the Roman sphere of influence, though it retained nominal autonomy. The end of Antigonid Macedon came when Philip V's son, Perseus, was defeated and captured by the Romans in the Third Macedonian War (171–168 BC).

Rest of Greece

During the Hellenistic period the importance of Greece proper within the Greek-speaking world declined sharply. The great centers of Hellenistic culture were Alexandria and Antioch, capitals of Ptolemaic Egypt and Seleucid Syria respectively. The conquests of Alexander greatly widened the horizons of the Greek world, making the endless conflicts between the cities which had marked the 5th and 4th centuries BC seem petty and unimportant. It led to a steady emigration, particularly of the young and ambitious, to the new Greek empires in the east. Many Greeks migrated to Alexandria, Antioch and the many other new Hellenistic cities founded in Alexander's wake, as far away as modern Afghanistan and Pakistan. Independent city states were unable to compete with Hellenistic kingdoms and were usually forced to ally themselves to one of them for defense, giving honors to Hellenistic rulers in return for protection.
One example is Athens, which had been decisively defeated by Antipater in the Lamian War (323–322 BC) and had its port in the Piraeus garrisoned by Macedonian troops who supported a conservative oligarchy. After Demetrius Poliorcetes captured Athens in 307 BC and restored the democracy, the Athenians honored him and his father Antigonus by placing gold statues of them on the agora and granting them the title of king. Athens later allied itself to Ptolemaic Egypt to throw off Macedonian rule, eventually setting up a religious cult for the Ptolemaic kings and naming one of the city's phyles in honour of Ptolemy for his aid against Macedon. In spite of the Ptolemaic monies and fleets backing their endeavors, Athens and Sparta were defeated by Antigonus II during the Chremonidean War (267–261 BC). Athens was then occupied by Macedonian troops, and run by Macedonian officials.

Sparta remained independent, but it was no longer the leading military power in the Peloponnese. The Spartan king Cleomenes III (235–222 BC) staged a military coup against the conservative ephors and pushed through radical social and land reforms in order to increase the size of the shrinking Spartan citizenry able to provide military service and restore Spartan power. Sparta's bid for supremacy was crushed at the Battle of Sellasia (222 BC) by the Achaean League and Macedon, who restored the power of the ephors.

Other city states formed federated states in self-defense, such as the Aetolian League (est. 370 BC), the Achaean League (est. 280 BC), the Boeotian League, the "Northern League" (Byzantium, Chalcedon, Heraclea Pontica and Tium) and the "Nesiotic League" of the Cyclades. These federations involved a central government which controlled foreign policy and military affairs, while leaving most of the local governing to the city states, a system termed sympoliteia. In states such as the Achaean League, this also involved the admission of other ethnic groups into the federation with equal rights, in this case, non-Achaeans. The Achaean League was able to drive the Macedonians out of the Peloponnese and free Corinth, which duly joined the league.

One of the few city states that managed to maintain full independence from the control of any Hellenistic kingdom was Rhodes. With a skilled navy to protect its trade fleets from pirates and an ideal strategic position covering the routes from the east into the Aegean, Rhodes prospered during the Hellenistic period. It became a center of culture and commerce, its coins were widely circulated and its philosophical schools were among the best in the Mediterranean. After holding out for one year under siege by Demetrius Poliorcetes (305–304 BC), the Rhodians built the Colossus of Rhodes to commemorate their victory. They retained their independence by the maintenance of a powerful navy, by maintaining a carefully neutral posture and acting to preserve the balance of power between the major Hellenistic kingdoms. Initially Rhodes had very close ties with the Ptolemaic kingdom. Rhodes later became a Roman ally against the Seleucids, receiving some territory in Caria for its role in the Roman–Seleucid War (192–188 BC). Rome eventually turned on Rhodes and annexed the island as a Roman province.

The west Balkan coast was inhabited by various Illyrian tribes and kingdoms such as the kingdom of the Dalmatae and of the Ardiaei, who often engaged in piracy under Queen Teuta (reigned 231 BC to 227 BC). Further inland was the Illyrian Paeonian Kingdom and the tribe of the Agrianes.
Illyrians on the coast of the Adriatic were under the influence of Hellenisation, and some tribes adopted Greek, becoming bilingual due to their proximity to the Greek colonies in Illyria. Illyrians imported weapons and armor from the Ancient Greeks (such as the Illyrian type helmet, originally a Greek type) and also adopted the ornamentation of Ancient Macedon on their shields and their war belts (a single example has been found, dated to the 3rd century BC, at modern Selce e Poshtme, which was part of Macedon at the time under Philip V of Macedon).

The Odrysian Kingdom was a union of Thracian tribes under the kings of the powerful Odrysian tribe centered around the region of Thrace. Various parts of Thrace were under Macedonian rule under Philip II of Macedon, Alexander the Great, Lysimachus, Ptolemy II, and Philip V but were also often ruled by their own kings. The Thracians and Agrianes were widely used by Alexander as peltasts and light cavalry, forming about one fifth of his army. The Diadochi also used Thracian mercenaries in their armies and they were also used as colonists. The Odrysians used Greek as the language of administration and of the nobility. The nobility also adopted Greek fashions in dress, ornament and military equipment, spreading it to the other tribes. Thracian kings were among the first to be Hellenized.

Southern Italy (Magna Graecia) and south-eastern Sicily had been colonized by the Greeks during the 8th century BC. In 4th-century BC Sicily the leading Greek city and hegemon was Syracuse. During the Hellenistic period the leading figure in Sicily was Agathocles of Syracuse (361–289 BC), who seized the city with an army of mercenaries in 317 BC. Agathocles extended his power throughout most of the Greek cities in Sicily and fought a long war with the Carthaginians, at one point invading Tunisia in 310 BC and defeating a Carthaginian army there. This was the first time a European force had invaded the region. After this war he controlled most of south-east Sicily and had himself proclaimed king, in imitation of the Hellenistic monarchs of the east. Agathocles then invaded Italy (c. 300 BC) in defense of Tarentum against the Bruttians and Romans, but was unsuccessful.

Greeks in pre-Roman Gaul were mostly limited to the Mediterranean coast of Provence, France. The first Greek colony in the region was Massalia, which became one of the largest trading ports of the Mediterranean by the 4th century BC, with 6,000 inhabitants. Massalia was also the local hegemon, controlling various coastal Greek cities like Nice and Agde. The coins minted in Massalia have been found in all parts of Ligurian-Celtic Gaul. Celtic coinage was influenced by Greek designs, and Greek letters can be found on various Celtic coins, especially those of Southern France. Traders from Massalia ventured inland deep into France on the Rivers Durance and Rhône, and established overland trade routes deep into Gaul, and to Switzerland and Burgundy. The Hellenistic period saw the Greek alphabet spread into southern Gaul from Massalia (3rd and 2nd centuries BC) and, according to Strabo, Massalia was also a center of education, where Celts went to learn Greek. A staunch ally of Rome, Massalia retained its independence until it sided with Pompey in 49 BC and was then taken by Caesar's forces.
The city of Emporion (modern Empúries), originally founded by Archaic-period settlers from Phocaea and Massalia in the 6th century BC near the village of Sant Martí d'Empúries (located on an offshore island that forms part of L'Escala, Catalonia, Spain), was reestablished in the 5th century BC with a new city (neapolis) on the Iberian mainland. Emporion contained a mixed population of Greek colonists and Iberian natives, and although Livy and Strabo assert that they lived in different quarters, these two groups were eventually integrated. The city became a dominant trading hub and center of Hellenistic civilization in Iberia, eventually siding with the Roman Republic against the Carthaginian Empire during the Second Punic War (218–201 BC). However, Emporion lost its political independence around 195 BC with the establishment of the Roman province of Hispania Citerior and by the 1st century BC had become fully Romanized in culture.

Hellenistic Middle East

The Hellenistic states of Asia and Egypt were run by an occupying imperial elite of Greco-Macedonian administrators and governors propped up by a standing army of mercenaries and a small core of Greco-Macedonian settlers. Promotion of immigration from Greece was important in the establishment of this system. Hellenistic monarchs ran their kingdoms as royal estates and most of the heavy tax revenues went into the military and paramilitary forces which preserved their rule from any kind of revolution. Macedonian and Hellenistic monarchs were expected to lead their armies on the field, along with a group of privileged aristocratic companions or friends (hetairoi, philoi) who dined and drank with the king and acted as his advisory council. The monarch was also expected to serve as a charitable patron of the people; this public philanthropy could mean building projects and handing out gifts but also promotion of Greek culture and religion.

Ptolemy, a somatophylax, one of the seven bodyguards who served as Alexander the Great's generals and deputies, was appointed satrap of Egypt after Alexander's death in 323 BC. In 305 BC, he declared himself King Ptolemy I, later known as "Soter" (saviour) for his role in helping the Rhodians during the siege of Rhodes. Ptolemy built new cities such as Ptolemais Hermiou in upper Egypt and settled his veterans throughout the country, especially in the region of the Faiyum. Alexandria, a major center of Greek culture and trade, became his capital city. As Egypt's first port city, it was the main grain exporter in the Mediterranean. The Egyptians begrudgingly accepted the Ptolemies as the successors to the pharaohs of independent Egypt, though the kingdom went through several native revolts. The Ptolemies took on the traditions of the Egyptian Pharaohs, such as marrying their siblings (Ptolemy II was the first to adopt this custom), having themselves portrayed on public monuments in Egyptian style and dress, and participating in Egyptian religious life. The Ptolemaic ruler cult portrayed the Ptolemies as gods, and temples to the Ptolemies were erected throughout the kingdom. Ptolemy I even created a new god, Serapis, who was a combination of two Egyptian gods, Apis and Osiris, with attributes of Greek gods. Ptolemaic administration was, like the Ancient Egyptian bureaucracy, highly centralized and focused on squeezing as much revenue out of the population as possible through tariffs, excise duties, fines, taxes and so forth.
A whole class of petty officials, tax farmers, clerks and overseers made this possible. The Egyptian countryside was directly administered by this royal bureaucracy. External possessions such as Cyprus and Cyrene were run by strategoi, military commanders appointed by the crown. Under Ptolemy II, Callimachus, Apollonius of Rhodes, Theocritus and a host of other poets made the city a center of Hellenistic literature. Ptolemy himself was eager to patronise the library, scientific research and individual scholars who lived on the grounds of the library. He and his successors also fought a series of wars with the Seleucids, known as the Syrian wars, over the region of Coele-Syria. Ptolemy IV won the great battle of Raphia (217 BC) against the Seleucids, using native Egyptians trained as phalangites. However these Egyptian soldiers revolted, eventually setting up a native breakaway Egyptian state in the Thebaid between 205–186/5 BC, severely weakening the Ptolemaic state. Ptolemy's family ruled Egypt until the Roman conquest of 30 BC. All the male rulers of the dynasty took the name Ptolemy. Ptolemaic queens, some of whom were the sisters of their husbands, were usually called Cleopatra, Arsinoe, or Berenice. The most famous member of the line was the last queen, Cleopatra VII, known for her role in the Roman political battles between Julius Caesar and Pompey, and later between Octavian and Mark Antony. Her suicide at the conquest by Rome marked the end of Ptolemaic rule in Egypt though Hellenistic culture continued to thrive in Egypt throughout the Roman and Byzantine periods until the Muslim conquest. Following division of Alexander's empire, Seleucus I Nicator received Babylonia. From there, he created a new empire which expanded to include much of Alexander's near eastern territories. At the height of its power, it included central Anatolia, the Levant, Mesopotamia, Persia, today's Turkmenistan, Pamir, and parts of Pakistan. It included a diverse population estimated at fifty to sixty million people. Under Antiochus I (c. 324/3 – 261 BC), however, the unwieldy empire was already beginning to shed territories. Pergamum broke away under Eumenes I who defeated a Seleucid army sent against him. The kingdoms of Cappadocia, Bithynia and Pontus were all practically independent by this time as well. Like the Ptolemies, Antiochus I established a dynastic religious cult, deifying his father Seleucus I. Seleucus, officially said to be descended from Apollo, had his own priests and monthly sacrifices. The erosion of the empire continued under Seleucus II, who was forced to fight a civil war (239–236) against his brother Antiochus Hierax and was unable to keep Bactria, Sogdiana and Parthia from breaking away. Hierax carved off most of Seleucid Anatolia for himself, but was defeated, along with his Galatian allies, by Attalus I of Pergamon who now also claimed kingship. The vast Seleucid Empire was, like Egypt, mostly dominated by a Greco-Macedonian political elite. The Greek population of the cities who formed the dominant elite were reinforced by emigration from Greece. These cities included newly founded colonies such as Antioch, the other cities of the Syrian tetrapolis, Seleucia (north of Babylon) and Dura-Europos on the Euphrates. These cities retained traditional Greek city state institutions such as assemblies, councils and elected magistrates, but this was a facade for they were always controlled by the royal Seleucid officials. 
Apart from these cities, there was also a large number of Seleucid garrisons (choria), military colonies (katoikiai) and Greek villages (komai) which the Seleucids planted throughout the empire to cement their rule. This 'Greco-Macedonian' population (which also included the sons of settlers who had married local women) could make up a phalanx of 35,000 men (out of a total Seleucid army of 80,000) during the reign of Antiochus III. The rest of the army was made up of native troops. Antiochus III ("the Great") conducted several vigorous campaigns to retake all the provinces the empire had lost since the death of Seleucus I. After being defeated by Ptolemy IV's forces at Raphia (217 BC), Antiochus III led a long campaign to the east to subdue the far eastern breakaway provinces (212–205 BC) including Bactria, Parthia, Ariana, Sogdiana, Gedrosia and Drangiana. He was successful, bringing back most of these provinces into at least nominal vassalage and receiving tribute from their rulers. After the death of Ptolemy IV (204 BC), Antiochus took advantage of the weakness of Egypt to conquer Coele-Syria in the Fifth Syrian War (202–195 BC). He then began expanding his influence into Pergamene territory in Asia and crossed into Europe, fortifying Lysimachia on the Hellespont, but his expansion into Anatolia and Greece was abruptly halted after a decisive defeat at the Battle of Magnesia (190 BC). In the Treaty of Apamea which ended the war, Antiochus lost all of his territories in Anatolia west of the Taurus and was forced to pay a large indemnity of 15,000 talents. Much of the eastern part of the empire was then conquered by the Parthians under Mithridates I of Parthia in the mid-2nd century BC, yet the Seleucid kings continued to rule a rump state from Syria until the invasion by the Armenian king Tigranes the Great and their ultimate overthrow by the Roman general Pompey.

After the death of Lysimachus, one of his officers, Philetaerus, took control of the city of Pergamum in 282 BC along with Lysimachus' war chest of 9,000 talents and declared himself loyal to Seleucus I while remaining de facto independent. His descendant, Attalus I, defeated the invading Galatians and proclaimed himself an independent king. Attalus I (241–197 BC) was a staunch ally of Rome against Philip V of Macedon during the First and Second Macedonian Wars. For his support against the Seleucids in 190 BC, Eumenes II was rewarded with all the former Seleucid domains in Asia Minor. Eumenes II turned Pergamon into a centre of culture and science by establishing the library of Pergamum, which was said to be second only to the library of Alexandria with 200,000 volumes according to Plutarch. It included a reading room and a collection of paintings. Eumenes II also constructed the Pergamum Altar with friezes depicting the Gigantomachy on the acropolis of the city. Pergamum was also a center of parchment (charta pergamena) production. The Attalids ruled Pergamon until Attalus III bequeathed the kingdom to the Roman Republic in 133 BC to avoid a likely succession crisis.

The Celts who settled in Galatia came through Thrace under the leadership of Leotarios and Leonnorios c. 270 BC. They were defeated by Antiochus I in the 'battle of the Elephants', but were still able to establish a Celtic territory in central Anatolia. The Galatians were well respected as warriors and were widely used as mercenaries in the armies of the successor states.
They continued to attack neighboring kingdoms such as Bithynia and Pergamon, plundering and extracting tribute. This came to an end when they sided with the renegade Seleucid prince Antiochus Hierax, who tried to defeat Attalus, the ruler of Pergamon (241–197 BC). Attalus severely defeated the Gauls, forcing them to confine themselves to Galatia. The theme of the Dying Gaul (a famous statue displayed in Pergamon) remained a favorite in Hellenistic art for a generation, signifying the victory of the Greeks over a noble enemy. In the early 2nd century BC, the Galatians became allies of Antiochus the Great, the last Seleucid king trying to regain suzerainty over Asia Minor. In 189 BC, Rome sent Gnaeus Manlius Vulso on an expedition against the Galatians. Galatia was henceforth dominated by Rome through regional rulers from 189 BC onward. After their defeats by Pergamon and Rome the Galatians slowly became hellenized and they were called "Gallo-Graeci" by the historian Justin as well as Ἑλληνογαλάται (Hellēnogalátai) by Diodorus Siculus in his Bibliotheca historica v.32.5, who wrote that they were "called Helleno-Galatians because of their connection with the Greeks."

The Bithynians were a Thracian people living in northwest Anatolia. After Alexander's conquests the region of Bithynia came under the rule of the native king Bas, who defeated Calas, a general of Alexander the Great, and maintained the independence of Bithynia. His son, Zipoetes I of Bithynia, maintained this autonomy against Lysimachus and Seleucus I, and assumed the title of king (basileus) in 297 BC. His son and successor, Nicomedes I, founded Nicomedia, which soon rose to great prosperity, and during his long reign (c. 278 – c. 255 BC), as well as those of his successors, the kingdom of Bithynia held a considerable place among the minor monarchies of Anatolia. Nicomedes also invited the Celtic Galatians into Anatolia as mercenaries, and they later turned on his son Prusias I, who defeated them in battle. Their last king, Nicomedes IV, was unable to maintain himself against Mithridates VI of Pontus, and, after being restored to his throne by the Roman Senate, he bequeathed his kingdom by will to the Roman Republic (74 BC).

Cappadocia, a mountainous region situated between Pontus and the Taurus mountains, was ruled by a Persian dynasty. Ariarathes I (332–322 BC) was the satrap of Cappadocia under the Persians and after the conquests of Alexander he retained his post. After Alexander's death he was defeated by Eumenes and crucified in 322 BC, but his son, Ariarathes II, managed to regain the throne and maintain his autonomy against the warring Diadochi. In 255 BC, Ariarathes III took the title of king and married Stratonice, a daughter of Antiochus II, remaining an ally of the Seleucid kingdom. Under Ariarathes IV, Cappadocia came into relations with Rome, first as a foe espousing the cause of Antiochus the Great, then as an ally against Perseus of Macedon and finally in a war against the Seleucids. Ariarathes V also waged war with Rome against Aristonicus, a claimant to the throne of Pergamon, and their forces were annihilated in 130 BC. This defeat allowed Pontus to invade and conquer the kingdom.

Kingdom of Pontus

The Kingdom of Pontus was a Hellenistic kingdom on the southern coast of the Black Sea. It was founded by Mithridates I in 291 BC and lasted until its conquest by the Roman Republic in 63 BC.
Despite being ruled by a dynasty of Persian Achaemenid descent, it became hellenized due to the influence of the Greek cities on the Black Sea and its neighboring kingdoms. Pontic culture was a mix of Greek and Iranian elements; the most hellenized parts of the kingdom were on the coast, populated by Greek colonies such as Trapezus and Sinope, the latter of which became the capital of the kingdom. Epigraphic evidence also shows extensive Hellenistic influence in the interior. During the reign of Mithridates II, Pontus was allied with the Seleucids through dynastic marriages. By the time of Mithridates VI Eupator, Greek was the official language of the kingdom, though Anatolian languages continued to be spoken. The kingdom grew to its largest extent under Mithridates VI, who conquered Colchis, Cappadocia, Paphlagonia, Bithynia, Lesser Armenia, the Bosporan Kingdom, the Greek colonies of the Tauric Chersonesos and, for a brief time, the Roman province of Asia. Mithridates VI, himself of mixed Persian and Greek ancestry, presented himself as the protector of the Greeks against the 'barbarians' of Rome, styling himself as "King Mithridates Eupator Dionysus" and as the "great liberator". Mithridates also depicted himself with the anastole hairstyle of Alexander and used the symbolism of Herakles, from whom the Macedonian kings claimed descent. After a long struggle with Rome in the Mithridatic wars, Pontus was defeated; part of it was incorporated into the Roman Republic as the province of Bithynia, while Pontus' eastern half survived as a client kingdom.

Orontid Armenia formally passed to the empire of Alexander the Great following his conquest of Persia. Alexander appointed an Orontid named Mithranes to govern Armenia. Armenia later became a vassal state of the Seleucid Empire, but it maintained a considerable degree of autonomy, retaining its native rulers. Around 212 BC the country was divided into two kingdoms, Greater Armenia and Armenia Sophene, including Commagene or Armenia Minor. The kingdoms became so independent from Seleucid control that Antiochus III the Great waged war on them during his reign and replaced their rulers. After the Seleucid defeat at the Battle of Magnesia in 190 BC, the kings of Sophene and Greater Armenia revolted and declared their independence, with Artaxias becoming the first king of the Artaxiad dynasty of Armenia in 188 BC. During the reign of the Artaxiads, Armenia went through a period of hellenization. Numismatic evidence shows Greek artistic styles and the use of the Greek language. Some coins describe the Armenian kings as "Philhellenes". During the reign of Tigranes the Great (95–55 BC), the kingdom of Armenia reached its greatest extent, containing many Greek cities, including the entire Syrian tetrapolis. Cleopatra, the wife of Tigranes the Great, invited Greeks such as the rhetor Amphicrates and the historian Metrodorus of Scepsis to the Armenian court, and, according to Plutarch, when the Roman general Lucullus seized the Armenian capital, Tigranocerta, he found a troupe of Greek actors who had arrived to perform plays for Tigranes. Tigranes' successor Artavasdes II even composed Greek tragedies himself.

Parthia was a north-eastern Iranian satrapy of the Achaemenid Empire which later passed on to Alexander's empire. Under the Seleucids, Parthia was governed by various Greek satraps such as Nicanor and Philip.
In 247 BC, following the death of Antiochus II Theos, Andragoras, the Seleucid governor of Parthia, proclaimed his independence and began minting coins showing himself wearing a royal diadem and claiming kingship. He ruled until 238 BC, when Arsaces, the leader of the Parni tribe, conquered Parthia, killing Andragoras and inaugurating the Arsacid Dynasty. Antiochus III recaptured Arsacid-controlled territory in 209 BC from Arsaces II. Arsaces II sued for peace and became a vassal of the Seleucids. It was not until the reign of Phraates I (168–165 BC) that the Arsacids would again begin to assert their independence. During the reign of Mithridates I of Parthia, Arsacid control expanded to include Herat (in 167 BC), Babylonia (in 144 BC), Media (in 141 BC), Persia (in 139 BC), and large parts of Syria (in the 110s BC). The Seleucid–Parthian wars continued as the Seleucids invaded Mesopotamia under Antiochus VII Sidetes (r. 138–129 BC), but he was eventually killed by a Parthian counterattack. After the fall of the Seleucid dynasty, the Parthians fought frequently against neighbouring Rome in the Roman–Parthian Wars (66 BC – 217 AD). Abundant traces of Hellenism continued under the Parthian empire. The Parthians used Greek as well as their own Parthian language (though to a lesser extent than Greek) as languages of administration, and also used Greek drachmas as coinage. They enjoyed Greek theater, and Greek art influenced Parthian art. The Parthians continued worshipping Greek gods syncretized together with Iranian deities. Their rulers established ruler cults in the manner of Hellenistic kings and often used Hellenistic royal epithets.

The Nabatean Kingdom was an Arab state located between the Sinai Peninsula and the Arabian Peninsula. Its capital was the city of Petra, an important trading city on the incense route. The Nabateans resisted the attacks of Antigonus and were allies of the Hasmoneans in their struggle against the Seleucids, but later fought against Herod the Great. The hellenization of the Nabateans occurred relatively late in comparison to the surrounding regions. Nabatean material culture does not show any Greek influence until the reign of Aretas III Philhellene in the 1st century BC. Aretas captured Damascus and built the Petra pool complex and gardens in the Hellenistic style. Though the Nabateans originally worshipped their traditional gods in symbolic form such as stone blocks or pillars, during the Hellenistic period they began to identify their gods with Greek gods and depict them in figurative forms influenced by Greek sculpture. Nabatean art shows Greek influences, and paintings have been found depicting Dionysian scenes. They also slowly adopted Greek as a language of commerce along with Aramaic and Arabic.

During the Hellenistic period, Judea became a frontier region between the Seleucid Empire and Ptolemaic Egypt and therefore was often the frontline of the Syrian wars, changing hands several times during these conflicts. Under the Hellenistic kingdoms, Judea was ruled by the hereditary office of the High Priest of Israel as a Hellenistic vassal. This period also saw the rise of a Hellenistic Judaism, which first developed in the Jewish diaspora of Alexandria and Antioch, and then spread to Judea. The major literary product of this cultural syncretism is the Septuagint translation of the Hebrew Bible from Biblical Hebrew and Biblical Aramaic to Koiné Greek.
The reason for the production of this translation seems to be that many of the Alexandrian Jews had lost the ability to speak Hebrew and Aramaic. Between 301 and 219 BC the Ptolemies ruled Judea in relative peace, and Jews often found themselves working in the Ptolemaic administration and army, which led to the rise of a Hellenized Jewish elite class (e.g. the Tobiads). The wars of Antiochus III brought the region into the Seleucid empire; Jerusalem fell to his control in 198 BC and the Temple was repaired and provided with money and tribute. Antiochus IV Epiphanes sacked Jerusalem and looted the Temple in 169 BC after disturbances in Judea during his abortive invasion of Egypt. Antiochus then banned key Jewish religious rites and traditions in Judea. He may have been attempting to Hellenize the region and unify his empire; the Jewish resistance to this eventually led to an escalation of violence. Whatever the case, tensions between pro- and anti-Seleucid Jewish factions led to the 174–135 BC Maccabean Revolt of Judas Maccabeus (whose victory is celebrated in the Jewish festival of Hanukkah). Modern interpretations see this period as a civil war between Hellenized and orthodox forms of Judaism. Out of this revolt was formed an independent Jewish kingdom known as the Hasmonean Dynasty, which lasted from 165 BC to 63 BC. The Hasmonean Dynasty eventually disintegrated in a civil war, which coincided with civil wars in Rome. The last Hasmonean ruler, Antigonus II Mattathias, was captured by Herod and executed in 37 BC. In spite of originally being a revolt against Greek overlordship, the Hasmonean kingdom and also the Herodian kingdom which followed gradually became more and more hellenized. From 37 BC to 4 BC, Herod the Great ruled as a Jewish-Roman client king appointed by the Roman Senate. He considerably enlarged the Temple (see Herod's Temple), making it one of the largest religious structures in the world. The style of the enlarged temple and other Herodian architecture shows significant Hellenistic architectural influence. His son, Herod Archelaus, ruled from 4 BC to 6 AD, when he was deposed and Roman Judea was formed.

The Greek kingdom of Bactria began as a breakaway satrapy of the Seleucid empire, which, because of the size of the empire, had significant freedom from central control. Between 255 and 246 BC, the governor of Bactria, Sogdiana and Margiana (most of present-day Afghanistan), one Diodotus, took this process to its logical extreme and declared himself king. Diodotus II, son of Diodotus, was overthrown in about 230 BC by Euthydemus, possibly the satrap of Sogdiana, who then started his own dynasty. In c. 210 BC, the Greco-Bactrian kingdom was invaded by a resurgent Seleucid empire under Antiochus III. While victorious in the field, it seems Antiochus came to realise that there were advantages in the status quo (perhaps sensing that Bactria could not be governed from Syria), and married one of his daughters to Euthydemus's son, thus legitimising the Greco-Bactrian dynasty. Soon afterwards the Greco-Bactrian kingdom seems to have expanded, possibly taking advantage of the defeat of the Parthian king Arsaces II by Antiochus. According to Strabo, the Greco-Bactrians seem to have had contacts with China through the silk road trade routes (Strabo, XI.XI.I). Indian sources also attest to religious contact between Buddhist monks and the Greeks, and some Greco-Bactrians did convert to Buddhism.
Demetrius, son and successor of Euthydemus, invaded north-western India in 180 BC, after the destruction of the Mauryan Empire there; the Mauryans were probably allies of the Bactrians (and Seleucids). The exact justification for the invasion remains unclear, but by about 175 BC, the Greeks ruled over parts of north-western India. This period also marks the beginning of the obscurity of Greco-Bactrian history. Demetrius possibly died about 180 BC; numismatic evidence suggests the existence of several other kings shortly thereafter. It is probable that at this point the Greco-Bactrian kingdom split into several semi-independent regions for some years, often warring amongst themselves. Heliocles was the last Greek to clearly rule Bactria, his power collapsing in the face of central Asian tribal invasions (Scythian and Yuezhi), by about 130 BC. However, Greek urban civilisation seems to have continued in Bactria after the fall of the kingdom, having a hellenising effect on the tribes which had displaced Greek rule. The Kushan Empire which followed continued to use Greek on their coinage and Greeks continued being influential in the empire.

The separation of the Indo-Greek kingdom from the Greco-Bactrian kingdom resulted in an even more isolated position, and thus the details of the Indo-Greek kingdom are even more obscure than for Bactria. Many supposed kings in India are known only because of coins bearing their name. The numismatic evidence together with archaeological finds and the scant historical records suggest that the fusion of eastern and western cultures reached its peak in the Indo-Greek kingdom. After Demetrius' death, civil wars between Bactrian kings in India allowed Apollodotus I (from c. 180/175 BC) to make himself independent as the first proper Indo-Greek king (who did not rule from Bactria). Large numbers of his coins have been found in India, and he seems to have reigned in Gandhara as well as western Punjab. Apollodotus I was succeeded by or ruled alongside Antimachus II, likely the son of the Bactrian king Antimachus I. In about 155 (or 165) BC he seems to have been succeeded by the most successful of the Indo-Greek kings, Menander I. Menander converted to Buddhism, and seems to have been a great patron of the religion; he is remembered in some Buddhist texts as 'Milinda'. He also expanded the kingdom further east into Punjab, though these conquests were rather ephemeral.

After the death of Menander (c. 130 BC), the Kingdom appears to have fragmented, with several 'kings' attested contemporaneously in different regions. This inevitably weakened the Greek position, and territory seems to have been lost progressively. Around 70 BC, the western regions of Arachosia and Paropamisadae were lost to tribal invasions, presumably by those tribes responsible for the end of the Bactrian kingdom. The resulting Indo-Scythian kingdom seems to have gradually pushed the remaining Indo-Greek kingdom towards the east. The Indo-Greek kingdom appears to have lingered on in western Punjab until about 10 AD, at which time it was finally ended by the Indo-Scythians. After conquering the Indo-Greeks, the Kushan empire took over Greco-Buddhism, the Greek language, Greek script, Greek coinage and artistic styles. Greeks continued being an important part of the cultural world of India for generations. The depictions of the Buddha appear to have been influenced by Greek culture: Buddha representations in the Gandhara period often showed Buddha under the protection of Herakles.
Several references in Indian literature praise the knowledge of the Yavanas or the Greeks. The Mahabharata compliments them as "the all-knowing Yavanas" (sarvajnaa yavanaa); e.g., "The Yavanas, O king, are all-knowing; the Suras are particularly so. The mlecchas are wedded to the creations of their own fancy", such as flying machines that are generally called vimanas. The "Brihat-Samhita" of the mathematician Varahamihira says: "The Greeks, though impure, must be honored since they were trained in sciences and therein excelled others."

Other states and Hellenistic influences

Hellenistic culture was at its height of world influence in the Hellenistic period. Hellenism, or at least Philhellenism, reached most regions on the frontiers of the Hellenistic kingdoms. Though some of these regions were not ruled by Greeks or even Greek-speaking elites, certain Hellenistic influences can be seen in the historical record and material culture of these regions. Other regions had established contact with Greek colonies before this period, and simply saw a continued process of Hellenization and intermixing.

Before the Hellenistic period, Greek colonies had been established on the coast of the Crimean and Taman peninsulas. The Bosporan Kingdom was a multi-ethnic kingdom of Greek city-states and local tribal peoples such as the Maeotians, Thracians, Crimean Scythians and Cimmerians under the Spartocid dynasty (438–110 BC). The Spartocids were a hellenized Thracian family from Panticapaeum. The Bosporans had long-lasting trade contacts with the Scythian peoples of the Pontic-Caspian steppe, and Hellenistic influence can be seen in the Scythian settlements of the Crimea, such as in the Scythian Neapolis. Scythian pressure on the Bosporan kingdom under Paerisades V led to its eventual vassalage under the Pontic king Mithradates VI for protection, c. 107 BC. It later became a Roman client state. Other Scythians on the steppes of Central Asia came into contact with Hellenistic culture through the Greeks of Bactria. Many Scythian elites purchased Greek products and some Scythian art shows Greek influences. At least some Scythians seem to have become Hellenized, because we know of conflicts between the elites of the Scythian kingdom over the adoption of Greek ways. These Hellenized Scythians were known as the "young Scythians". The peoples around Pontic Olbia, known as the Callipidae, were intermixed and Hellenized Greco-Scythians.

The Greek colonies on the west coast of the Black Sea, such as Istros, Tomi and Callatis, traded with the Thracian Getae who occupied modern-day Dobruja. From the 6th century BC on, the multiethnic people in this region gradually intermixed with each other, creating a Greco-Getic populace. Numismatic evidence shows that Hellenic influence penetrated further inland: Getae in Wallachia and Moldavia coined Getic tetradrachms, Getic imitations of Macedonian coinage. The ancient Georgian kingdoms had trade relations with the Greek city-states on the Black Sea coast such as Poti and Sukhumi. The kingdom of Colchis, which later became a Roman client state, received Hellenistic influences from the Black Sea Greek colonies.

In Arabia, Bahrain was referred to by the Greeks as Tylos, the centre of pearl trading, when Nearchus came to discover it while serving under Alexander the Great. The Greek admiral Nearchus is believed to have been the first of Alexander's commanders to visit these islands.
It is not known whether Bahrain was part of the Seleucid Empire, although the archaeological site at Qalat Al Bahrain has been proposed as a Seleucid base in the Persian Gulf. Alexander had planned to settle the eastern shores of the Persian Gulf with Greek colonists, and although it is not clear that this happened on the scale he envisaged, Tylos was very much part of the Hellenised world: the language of the upper classes was Greek (although Aramaic was in everyday use), while Zeus was worshipped in the form of the Arabian sun-god Shams. Tylos even became the site of Greek athletic contests.

Carthage was a Phoenician colony on the coast of Tunisia. Carthaginian culture came into contact with the Greeks through Punic colonies in Sicily and through their widespread Mediterranean trade network. While the Carthaginians retained their Punic culture and language, they did adopt some Hellenistic ways, one of the most prominent of which was their military practices. In 550 BC, Mago I of Carthage began a series of military reforms which included copying the army of Timoleon, Tyrant of Syracuse. The core of Carthage's military was the Greek-style phalanx formed by citizen hoplite spearmen who had been conscripted into service, though their armies also included large numbers of mercenaries. During the First Punic War, Carthage hired a Spartan mercenary captain, Xanthippus of Carthage, to reform their military forces; Xanthippus reformed the Carthaginian military along Macedonian army lines. By the 2nd century BC, the kingdom of Numidia also began to see Hellenistic culture influence its art and architecture. The Numidian royal monument at Chemtou is one example of Numidian Hellenized architecture. Reliefs on the monument also show that the Numidians had adopted Greco-Macedonian-type armor and shields for their soldiers.

Ptolemaic Egypt was the center of Hellenistic influence in Africa, and Greek colonies also thrived in the region of Cyrene, Libya. The kingdom of Meroë was in constant contact with Ptolemaic Egypt, and Hellenistic influences can be seen in its art and archaeology. There was a temple to Serapis, the Greco-Egyptian god.

Rise of Rome

Widespread Roman interference in the Greek world was probably inevitable given the general manner of the ascendancy of the Roman Republic. This Roman-Greek interaction began as a consequence of the Greek city-states located along the coast of southern Italy. Rome had come to dominate the Italian peninsula, and desired the submission of the Greek cities to its rule. Although they initially resisted, allying themselves with Pyrrhus of Epirus and defeating the Romans at several battles, the Greek cities were unable to maintain this position and were absorbed by the Roman Republic. Shortly afterwards, Rome became involved in Sicily, fighting against the Carthaginians in the First Punic War. The end result was the complete conquest of Sicily, including its previously powerful Greek cities, by the Romans.

Roman entanglement in the Balkans began when Illyrian piratical raids on Roman merchants led to invasions of Illyria (the First and Second Illyrian Wars). Tension between Macedon and Rome increased when the young king of Macedon, Philip V, harbored one of the chief pirates, Demetrius of Pharos (a former client of Rome). As a result, in an attempt to reduce Roman influence in the Balkans, Philip allied himself with Carthage after Hannibal had dealt the Romans a massive defeat at the Battle of Cannae (216 BC) during the Second Punic War.
Forcing the Romans to fight on another front when they were at a nadir of manpower gained Philip the lasting enmity of the Romans—the only real result of the somewhat insubstantial First Macedonian War (215–202 BC). Once the Second Punic War had been resolved, and the Romans had begun to regather their strength, they looked to re-assert their influence in the Balkans and to curb the expansion of Philip. A pretext for war was provided by Philip's refusal to end his war with Attalid Pergamum and Rhodes, both Roman allies. The Romans, also allied with the Aetolian League of Greek city-states (which resented Philip's power), thus declared war on Macedon in 200 BC, starting the Second Macedonian War. This ended with a decisive Roman victory at the Battle of Cynoscephalae (197 BC). Like most Roman peace treaties of the period, the resultant 'Peace of Flamininus' was designed utterly to crush the power of the defeated party: a massive indemnity was levied, Philip's fleet was surrendered to Rome, and Macedon was effectively returned to its ancient boundaries, losing influence over the city-states of southern Greece and land in Thrace and Asia Minor. The result was the end of Macedon as a major power in the Mediterranean.

As a result of the confusion in Greece at the end of the Second Macedonian War, the Seleucid Empire also became entangled with the Romans. The Seleucid Antiochus III had allied with Philip V of Macedon in 203 BC, agreeing that they should jointly conquer the lands of the boy-king of Egypt, Ptolemy V. After defeating Ptolemy in the Fifth Syrian War, Antiochus concentrated on occupying the Ptolemaic possessions in Asia Minor. However, this brought Antiochus into conflict with Rhodes and Pergamum, two important Roman allies, and began a 'cold war' between Rome and Antiochus (not helped by the presence of Hannibal at the Seleucid court). Meanwhile, in mainland Greece, the Aetolian League, which had sided with Rome against Macedon, now grew to resent the Roman presence in Greece. This presented Antiochus III with a pretext to invade Greece and 'liberate' it from Roman influence, thus starting the Roman-Syrian War (192–188 BC). In 191 BC, the Romans under Manius Acilius Glabrio routed him at Thermopylae and obliged him to withdraw to Asia. During the course of this war Roman troops moved into Asia for the first time, where they defeated Antiochus again at the Battle of Magnesia (190 BC). A crippling treaty was imposed on Antiochus, with Seleucid possessions in Asia Minor removed and given to Rhodes and Pergamum, the size of the Seleucid navy reduced, and a massive war indemnity exacted.

Thus, in less than twenty years, Rome had destroyed the power of one of the successor states, crippled another, and firmly entrenched its influence over Greece. This was primarily a result of the over-ambition of the Macedonian kings and their unintended provocation of Rome, though Rome was quick to exploit the situation. In another twenty years, the Macedonian kingdom was no more. Seeking to re-assert Macedonian power and Greek independence, Philip V's son Perseus incurred the wrath of the Romans, resulting in the Third Macedonian War (171–168 BC). Victorious, the Romans abolished the Macedonian kingdom, replacing it with four puppet republics; these lasted a further twenty years before Macedon was formally annexed as a Roman province (146 BC) after yet another rebellion under Andriscus. Rome now demanded that the Achaean League, the last stronghold of Greek independence, be dissolved.
The Achaeans refused and declared war on Rome. Most of the Greek cities rallied to the Achaeans' side, and even slaves were freed to fight for Greek independence. The Roman consul Lucius Mummius advanced from Macedonia and defeated the Greeks at Corinth, which was razed to the ground. In 146 BC, the Greek peninsula, though not the islands, became a Roman protectorate. Roman taxes were imposed, except in Athens and Sparta, and all the cities had to accept rule by Rome's local allies. The Attalid dynasty of Pergamum lasted only a little longer; a Roman ally until the end, its final king Attalus III died in 133 BC without an heir, and taking the alliance to its natural conclusion, willed Pergamum to the Roman Republic.

The final Greek resistance came in 88 BC, when King Mithridates of Pontus rebelled against Rome, captured Roman-held Anatolia, and massacred up to 100,000 Romans and Roman allies across Asia Minor. Many Greek cities, including Athens, overthrew their Roman puppet rulers and joined him in the Mithridatic wars. When he was driven out of Greece by the Roman general Lucius Cornelius Sulla, the latter laid siege to Athens and sacked the city. Mithridates was finally defeated by Gnaeus Pompeius Magnus (Pompey the Great) in 65 BC. Further ruin was brought to Greece by the Roman civil wars, which were partly fought in Greece. Finally, in 27 BC, Augustus directly annexed Greece to the new Roman Empire as the province of Achaea. The struggles with Rome had left Greece depopulated and demoralised. Nevertheless, Roman rule at least brought an end to warfare, and cities such as Athens, Corinth, Thessaloniki and Patras soon recovered their prosperity.

By contrast, having so firmly entrenched themselves in Greek affairs, the Romans now completely ignored the rapidly disintegrating Seleucid empire (perhaps because it posed no threat) and left the Ptolemaic kingdom to decline quietly, while acting as a protector of sorts, insofar as they stopped other powers from taking Egypt over (including the famous line-in-the-sand incident when the Seleucid Antiochus IV Epiphanes tried to invade Egypt). Eventually, instability in the near east resulting from the power vacuum left by the collapse of the Seleucid Empire caused the Roman proconsul Pompey the Great to abolish the Seleucid rump state, absorbing much of Syria into the Roman Republic. Famously, the end of Ptolemaic Egypt came as the final act in the republican civil war between the Roman triumvirs Mark Antony and Octavian. After the defeat of Antony and his lover, the last Ptolemaic monarch, Cleopatra VII, at the Battle of Actium, Octavian invaded Egypt and took it as his own personal fiefdom. He thereby completed the destruction of both the Hellenistic kingdoms and the Roman Republic, and ended (in hindsight) the Hellenistic era.

In some fields Hellenistic culture thrived, particularly in its preservation of the past. The states of the Hellenistic period were deeply fixated on the past and its seemingly lost glories. The preservation of many classical and archaic works of art and literature (including the works of the three great classical tragedians, Aeschylus, Sophocles, and Euripides) is due to the efforts of the Hellenistic Greeks. The museum and library of Alexandria were the center of this conservationist activity. With the support of royal stipends, Alexandrian scholars collected, translated, copied, classified, and critiqued every book they could find.
Most of the great literary figures of the Hellenistic period studied at Alexandria and conducted research there. They were scholar-poets, writing not only poetry but treatises on Homer and other archaic and classical Greek literature. Athens retained its position as the most prestigious seat of higher education, especially in the domains of philosophy and rhetoric, with considerable libraries and philosophical schools. Alexandria had the monumental museum (i.e., research center) and Library of Alexandria, which was estimated to have held 700,000 volumes. The city of Pergamon also had a large library and became a major center of book production. The island of Rhodes had a library and also boasted a famous finishing school for politics and diplomacy. Libraries were also present in Antioch, Pella, and Kos. Cicero was educated in Athens and Mark Antony in Rhodes. Antioch was founded as a metropolis and center of Greek learning, which retained its status into the era of Christianity. Seleucia replaced Babylon as the metropolis of the lower Tigris.

The spread of Greek culture and language throughout the Near East and Asia owed much to the development of newly founded cities and deliberate colonization policies by the successor states, which in turn was necessary for maintaining their military forces. Settlements such as Ai-Khanoum, situated on trade routes, allowed Greek culture to mix and spread. The language of Philip II's and Alexander's court and army (which was made up of various Greek and non-Greek-speaking peoples) was a version of Attic Greek, and over time this language developed into Koine, the lingua franca of the successor states. The identification of local gods with similar Greek deities, a practice termed 'interpretatio graeca', stimulated the building of Greek-style temples, and Greek culture in the cities meant that buildings such as gymnasia and theaters became common. Many cities maintained nominal autonomy while under the rule of the local king or satrap, and often had Greek-style institutions. Greek dedications, statues, architecture, and inscriptions have all been found. However, local cultures were not replaced, and mostly went on as before, but now with a new Greco-Macedonian or otherwise Hellenized elite. An example that shows the spread of Greek theater is Plutarch's story of the death of Crassus, in which his head was taken to the Parthian court and used as a prop in a performance of The Bacchae. Theaters have also been found: for example, in Ai-Khanoum on the edge of Bactria, the theater has 35 rows – larger than the theater in Babylon. The spread of Greek influence and language is also shown through Ancient Greek coinage. Portraits became more realistic, and the obverse of the coin was often used to display a propagandistic image, commemorating an event or displaying the image of a favored god. The use of Greek-style portraits and Greek language continued under the Roman, Parthian, and Kushan empires, even as the use of Greek was in decline.

Hellenization and acculturation

The concept of Hellenization, meaning the adoption of Greek culture in non-Greek regions, has long been controversial. Undoubtedly Greek influence did spread through the Hellenistic realms, but to what extent, and whether this was a deliberate policy or mere cultural diffusion, have been hotly debated. It seems likely that Alexander himself pursued policies which led to Hellenization, such as the foundation of new cities and Greek colonies.
While it may have been a deliberate attempt to spread Greek culture (or as Arrian says, "to civilise the natives"), it is more likely that it was a series of pragmatic measures designed to aid in the rule of his enormous empire. Cities and colonies were centers of administrative control and Macedonian power in a newly conquered region. Alexander also seems to have attempted to create a mixed Greco-Persian elite class, as shown by the Susa weddings and his adoption of some forms of Persian dress and court culture. He also brought Persian and other non-Greek peoples into his military and even into the elite cavalry units of the companion cavalry. Again, it is probably better to see these policies as a pragmatic response to the demands of ruling a large empire than as any idealized attempt to bring Greek culture to the 'barbarians'. This approach was bitterly resented by the Macedonians and discarded by most of the Diadochi after Alexander's death. These policies can also be interpreted as the result of Alexander's possible megalomania during his later years.

After Alexander's death in 323 BC, the influx of Greek colonists into the new realms continued to spread Greek culture into Asia. The founding of new cities and military colonies continued to be a major part of the Successors' struggle for control of any particular region, and these continued to be centers of cultural diffusion. The spread of Greek culture under the Successors seems mostly to have occurred with the spreading of Greeks themselves, rather than as an active policy. Throughout the Hellenistic world, these Greco-Macedonian colonists considered themselves by and large superior to the native "barbarians" and excluded most non-Greeks from the upper echelons of courtly and government life. Most of the native population was not Hellenized, had little access to Greek culture, and was often discriminated against by its Hellenic overlords. Gymnasiums and their Greek education, for example, were for Greeks only. Greek cities and colonies may have exported Greek art and architecture as far as the Indus, but these were mostly enclaves of Greek culture for the transplanted Greek elite. The degree of influence that Greek culture had throughout the Hellenistic kingdoms was therefore highly localized and based mostly on a few great cities like Alexandria and Antioch. Some natives did learn Greek and adopt Greek ways, but this was mostly limited to a few local elites who were allowed to retain their posts by the Diadochi and also to a small number of mid-level administrators who acted as intermediaries between the Greek-speaking upper class and their subjects. In the Seleucid Empire, for example, this group amounted to only 2.5 percent of the official class. Hellenistic art nevertheless had a considerable influence on the cultures that had been affected by the Hellenistic expansion. As far away as the Indian subcontinent, Hellenistic influence on Indian art was broad and far-reaching, and had effects for several centuries following the forays of Alexander the Great.

Despite their initial reluctance, the Successors seem to have later deliberately naturalized themselves to their different regions, presumably in order to help maintain control of the population. In the Ptolemaic kingdom, we find some Egyptianized Greeks from the 2nd century BC onwards. In the Indo-Greek kingdom we find kings who were converts to Buddhism (e.g., Menander). The Greeks in the regions therefore gradually became 'localized', adopting local customs as appropriate.
In this way, hybrid 'Hellenistic' cultures naturally emerged, at least among the upper echelons of society. The trends of Hellenization were therefore accompanied by Greeks adopting native ways over time, but this varied widely by place and by social class. The farther away from the Mediterranean and the lower in social status, the more likely a colonist was to adopt local ways, while the Greco-Macedonian elites and royal families usually remained thoroughly Greek and viewed most non-Greeks with disdain. It was not until Cleopatra VII that a Ptolemaic ruler bothered to learn the Egyptian language of their subjects.

In the Hellenistic period, there was much continuity in Greek religion: the Greek gods continued to be worshiped, and the same rites were practiced as before. However, the socio-political changes brought on by the conquest of the Persian empire and Greek emigration abroad meant that change also came to religious practices. This varied greatly by location. Athens, Sparta and most cities in the Greek mainland did not see much religious change or new gods (with the exception of the Egyptian Isis in Athens), while the multi-ethnic Alexandria had a very varied group of gods and religious practices, including Egyptian, Jewish and Greek. Greek emigres brought their Greek religion everywhere they went, even as far as India and Afghanistan. Non-Greeks also had more freedom to travel and trade throughout the Mediterranean, and in this period we can see Egyptian gods such as Serapis, and the Syrian gods Atargatis and Hadad, as well as a Jewish synagogue, all coexisting on the island of Delos alongside classical Greek deities. A common practice was to identify Greek gods with native gods that had similar characteristics, and this created new fusions like Zeus-Ammon, Aphrodite Hagne (a Hellenized Atargatis) and Isis-Demeter. Greek emigres faced individual religious choices they had not faced in their home cities, where the gods they worshiped were dictated by tradition.

Hellenistic monarchies were closely associated with the religious life of the kingdoms they ruled. This had already been a feature of Macedonian kingship, which had priestly duties. Hellenistic kings adopted patron deities as protectors of their house and sometimes claimed descent from them. The Seleucids, for example, took Apollo as patron, the Antigonids had Herakles, and the Ptolemies claimed Dionysus, among others. The worship of dynastic ruler cults was also a feature of this period, most notably in Egypt, where the Ptolemies adopted earlier Pharaonic practice and established themselves as god-kings. These cults were usually associated with a specific temple in honor of the ruler, such as the Ptolemaieia at Alexandria, and had their own festivals and theatrical performances. The setting up of ruler cults was based more on the systematized honors offered to the kings (sacrifice, proskynesis, statues, altars, hymns), which put them on a par with the gods (isotheism), than on actual belief in their divine nature. According to Peter Green, these cults did not produce genuine belief in the divinity of rulers among the Greeks and Macedonians. The worship of Alexander was also popular, as in the long-lived cult at Erythrae and, of course, at Alexandria, where his tomb was located. The Hellenistic age also saw a rise in disillusionment with traditional religion.
The rise of philosophy and the sciences had removed the gods from many of their traditional domains, such as their role in the movement of the heavenly bodies and natural disasters. The Sophists proclaimed the centrality of humanity and agnosticism, and belief in Euhemerism (the view that the gods were simply ancient kings and heroes) became popular. The popular philosopher Epicurus promoted a view of disinterested gods living far away from the human realm in metakosmia. The apotheosis of rulers also brought the idea of divinity down to earth. While there does seem to have been a substantial decline in religiosity, this was mostly reserved for the educated classes. Magic was practiced widely, and this, too, was a continuation from earlier times. Throughout the Hellenistic world, people would consult oracles, and use charms and figurines to deter misfortune or to cast spells. Also developed in this era was the complex system of astrology, which sought to determine a person's character and future in the movements of the sun, moon, and planets. Astrology was widely associated with the cult of Tyche (luck, fortune), which grew in popularity during this period.

The Hellenistic period saw the rise of New Comedy, the only surviving representative texts of which are those of Menander (born 342/1 BC). Only one play, Dyskolos, survives in its entirety. The plots of this new Hellenistic comedy of manners were more domestic and formulaic; stereotypical low-born characters such as slaves became more important, the language was colloquial, and major motifs included escapism, marriage, romance and luck (Tyche). Though no Hellenistic tragedy survives intact, tragedies were still widely produced during the period; it seems, however, that there was no major breakthrough in style, which remained within the classical model. The Supplementum Hellenisticum, a modern collection of extant fragments, contains the fragments of 150 authors. Hellenistic poets now sought patronage from kings, and wrote works in their honor. The scholars at the libraries in Alexandria and Pergamon focused on the collection, cataloging, and literary criticism of classical Athenian works and ancient Greek myths. The poet-critic Callimachus, a staunch elitist, wrote hymns equating Ptolemy II to Zeus and Apollo. He promoted short poetic forms such as the epigram, epyllion and the iambic, and attacked epic as base and common ("big book, big evil" was his doctrine). He also wrote a massive catalog of the holdings of the library of Alexandria, the famous Pinakes. Callimachus was extremely influential in his time and also for the development of Augustan poetry. Another poet, Apollonius of Rhodes, attempted to revive the epic for the Hellenistic world with his Argonautica. He had been a student of Callimachus and later became chief librarian (prostates) of the library of Alexandria. Apollonius and Callimachus spent much of their careers feuding with each other. Pastoral poetry also thrived during the Hellenistic era; Theocritus was a major poet who popularized the genre. Around 240 BC Livius Andronicus, a Greek slave from southern Italy, translated Homer's Odyssey into Latin. Greek literature would have a dominant effect on the development of the Latin literature of the Romans. The poetry of Virgil, Horace and Ovid was all based on Hellenistic styles.

During the Hellenistic period, many different schools of thought developed. Athens, with its multiple philosophical schools, remained the center of philosophical thought.
However, Athens had now lost her political freedom, and Hellenistic philosophy is a reflection of this new difficult period. In this political climate, Hellenistic philosophers went in search of goals such as ataraxia (un-disturbedness), autarky (self-sufficiency) and apatheia (freedom from suffering), which would allow them to wrest well-being or eudaimonia out of the most difficult turns of fortune. This occupation with the inner life, with personal inner liberty and with the pursuit of eudaimonia is what all Hellenistic philosophical schools have in common. The Epicureans and the Cynics rejected public offices and civic service, which amounted to a rejection of the polis itself, the defining institution of the Greek world. Epicurus promoted atomism and an asceticism based on freedom from pain as its ultimate goal. Cynics such as Diogenes of Sinope rejected all material possessions and social conventions (nomos) as unnatural and useless. The Cyrenaics, meanwhile, embraced hedonism, arguing that pleasure was the only true good. Stoicism, founded by Zeno of Citium, taught that virtue was sufficient for eudaimonia as it would allow one to live in accordance with Nature or Logos. Zeno became extremely popular; the Athenians set up a gold statue of him, and Antigonus II Gonatas invited him to the Macedonian court. The philosophical schools of Aristotle (the Peripatetics of the Lyceum) and Plato (Platonism at the Academy) also remained influential. The academy would eventually turn to Academic Skepticism under Arcesilaus until it was rejected by Antiochus of Ascalon (c. 90 BC) in favour of Neoplatonism. Hellenistic philosophy had a significant influence on the Greek ruling elite. Examples include Athenian statesman Demetrius of Phaleron, who had studied in the lyceum; the Spartan king Cleomenes III, who was a student of the Stoic Sphairos of Borysthenes; and Antigonus II, who was also a well known Stoic. This can also be said of the Roman upper classes, where Stoicism was dominant, as seen in the Meditations of the Roman emperor Marcus Aurelius and the works of Cicero. The spread of Christianity throughout the Roman world, followed by the spread of Islam, ushered in the end of Hellenistic philosophy and the beginnings of Medieval philosophy (often forcefully, as under Justinian I), which was dominated by the three Abrahamic traditions: Jewish philosophy, Christian philosophy, and early Islamic philosophy. In spite of this shift, Hellenistic philosophy continued to influence these three religious traditions and the renaissance thought which followed them. Hellenistic culture produced seats of learning throughout the Mediterranean. Hellenistic science differed from Greek science in at least two ways: first, it benefited from the cross-fertilization of Greek ideas with those that had developed in the larger Hellenistic world; secondly, to some extent, it was supported by royal patrons in the kingdoms founded by Alexander's successors. Especially important to Hellenistic science was the city of Alexandria in Egypt, which became a major center of scientific research in the 3rd century BC. Hellenistic scholars frequently employed the principles developed in earlier Greek thought: the application of mathematics and deliberate empirical research, in their scientific investigations. Hellenistic Geometers such as Archimedes (c. 287 – 212 BC), Apollonius of Perga (c. 262 – c. 190 BC), and Euclid (c. 
325 – 265 BC), whose Elements became the most important textbook in mathematics until the 19th century, built upon the work of the Hellenic-era Pythagoreans. Euclid developed proofs of the Pythagorean theorem and of the infinitude of primes, and worked on the five Platonic solids. Eratosthenes used his knowledge of geometry to measure the circumference of the Earth; his calculation was remarkably accurate. He was also the first to calculate the tilt of the Earth's axis (again with remarkable accuracy). Additionally, he may have accurately calculated the distance from the Earth to the Sun and invented the leap day. Known as the "Father of Geography", Eratosthenes also created the first map of the world incorporating parallels and meridians, based on the available geographical knowledge of the era. Astronomers like Hipparchus (c. 190 – c. 120 BC) built upon the measurements of the Babylonian astronomers before him to measure the precession of the equinoxes. Pliny reports that Hipparchus produced the first systematic star catalog after he observed a new star (it is uncertain whether this was a nova or a comet) and wished to preserve an astronomical record of the stars, so that other new stars could be discovered. It has recently been claimed that a celestial globe based on Hipparchus's star catalog sits atop the broad shoulders of a large 2nd-century Roman statue known as the Farnese Atlas. Another astronomer, Aristarchos of Samos, developed a heliocentric system. The level of Hellenistic achievement in astronomy and engineering is impressively shown by the Antikythera mechanism (150–100 BC). It is a 37-gear mechanical computer which computed the motions of the Sun and Moon, including lunar and solar eclipses predicted on the basis of astronomical periods believed to have been learned from the Babylonians. Devices of this sort are not found again until the 10th century, when a simpler eight-geared luni-solar calculator incorporated into an astrolabe was described by the Persian scholar Al-Biruni. Similarly complex devices were also developed by other Muslim engineers and astronomers during the Middle Ages.

Medicine, which was dominated by the Hippocratic tradition, saw new advances under Praxagoras of Kos, who theorized that blood traveled through the veins. Herophilos (335–280 BC) was the first to base his conclusions on dissection of the human body and animal vivisection, and to provide accurate descriptions of the nervous system, liver and other key organs. Influenced by Philinus of Cos (fl. 250 BC), a student of Herophilos, a new medical sect emerged, the Empiric school, which was based on strict observation and rejected the unseen causes posited by the Dogmatic school. Bolos of Mendes made developments in alchemy and Theophrastus was known for his work in plant classification. Krateuas wrote a compendium on botanic pharmacy. The library of Alexandria included a zoo for research, and Hellenistic zoologists included Archelaos, Leonidas of Byzantion, Apollodoros of Alexandria and Bion of Soloi.

Technological developments from the Hellenistic period include cogged gears, pulleys, the screw, Archimedes' screw, the screw press, glassblowing, hollow bronze casting, surveying instruments, an odometer, the pantograph, the water clock, a water organ, and the piston pump. The interpretation of Hellenistic science varies widely.
At one extreme is the view of the English classical scholar Cornford, who believed that "all the most important and original work was done in the three centuries from 600 to 300 BC". At the other is the view of the Italian physicist and mathematician Lucio Russo, who claims that scientific method was actually born in the 3rd century BC, to be forgotten during the Roman period and only revived in the Renaissance. Hellenistic warfare was a continuation of the military developments of Iphicrates and Philip II of Macedon, particularly his use of the Macedonian Phalanx, a dense formation of pikemen, in conjunction with heavy companion cavalry. Armies of the Hellenistic period differed from those of the classical period in being largely made up of professional soldiers and also in their greater specialization and technical proficiency in siege warfare. Hellenistic armies were significantly larger than those of classical Greece relying increasingly on Greek mercenaries (misthophoroi; men-for-pay) and also on non-Greek soldiery such as Thracians, Galatians, Egyptians and Iranians. Some ethnic groups were known for their martial skill in a particular mode of combat and were highly sought after, including Tarantine cavalry, Cretan archers, Rhodian slingers and Thracian peltasts. This period also saw the adoption of new weapons and troop types such as Thureophoroi and the Thorakitai who used the oval Thureos shield and fought with javelins and the machaira sword. The use of heavily armored cataphracts and also horse archers was adopted by the Seleucids, Greco-Bactrians, Armenians and Pontus. The use of war elephants also became common. Seleucus received Indian war elephants from the Mauryan empire, and used them to good effect at the battle of Ipsus. He kept a core of 500 of them at Apameia. The Ptolemies used the smaller African elephant. Hellenistic military equipment was generally characterized by an increase in size. Hellenistic-era warships grew from the trireme to include more banks of oars and larger numbers of rowers and soldiers as in the Quadrireme and Quinquereme. The Ptolemaic Tessarakonteres was the largest ship constructed in Antiquity. New siege engines were developed during this period. An unknown engineer developed the torsion-spring catapult (c. 360) and Dionysios of Alexandria designed a repeating ballista, the Polybolos. Preserved examples of ball projectiles range from 4.4 kg to 78 kg (or over 170 lbs). Demetrius Poliorcetes was notorious for the large siege engines employed in his campaigns, especially during the 12-month siege of Rhodes when he had Epimachos of Athens build a massive 160 ton siege tower named Helepolis, filled with artillery. The term Hellenistic is a modern invention; the Hellenistic World not only included a huge area covering the whole of the Aegean, rather than the Classical Greece focused on the Poleis of Athens and Sparta, but also a huge time range. In artistic terms this means that there is huge variety which is often put under the heading of "Hellenistic Art" for convenience. Hellenistic art saw a turn from the idealistic, perfected, calm and composed figures of classical Greek art to a style dominated by realism and the depiction of emotion (pathos) and character (ethos). The motif of deceptively realistic naturalism in art (aletheia) is reflected in stories such as that of the painter Zeuxis, who was said to have painted grapes that seemed so real that birds came and pecked at them. 
The female nude also became more popular, as epitomized by the Aphrodite of Cnidos of Praxiteles, and art in general became more erotic (e.g., Leda and the Swan and Scopas' Pothos). The dominant ideals of Hellenistic art were those of sensuality and passion. People of all ages and social statuses were depicted in the art of the Hellenistic age. Artists such as Peiraikos chose mundane and lower-class subjects for their paintings. According to Pliny, "He painted barbers' shops, cobblers' stalls, asses, eatables and similar subjects, earning for himself the name of rhyparographos [painter of dirt/low things]. In these subjects he could give consummate pleasure, selling them for more than other artists received for their large pictures" (Natural History, Book XXXV.112). Even barbarians, such as the Galatians, were depicted in heroic form, prefiguring the artistic theme of the noble savage. The image of Alexander the Great was also an important artistic theme, and all of the diadochi had themselves depicted imitating Alexander's youthful look. A number of the best-known works of Greek sculpture belong to the Hellenistic period, including Laocoön and his Sons, the Venus de Milo, and the Winged Victory of Samothrace. Developments in painting included experiments in chiaroscuro by Zeuxis and the development of landscape painting and still-life painting. Greek temples built during the Hellenistic period were generally larger than classical ones, such as the temple of Artemis at Ephesus, the temple of Artemis at Sardis, and the temple of Apollo at Didyma (rebuilt by Seleucus in 300 BC). The royal palace (basileion) also came into its own during the Hellenistic period, the first extant example being the massive 4th-century villa of Cassander at Vergina.

There has been a trend in writing the history of this period to depict Hellenistic art as a decadent style, following the Golden Age of Classical Athens. Pliny the Elder, after having described the sculpture of the classical period, says: Cessavit deinde ars ("then art disappeared"). The 18th-century terms Baroque and Rococo have sometimes been applied to the art of this complex and individual period. The renewal of the historiographical approach, as well as some recent discoveries such as the tombs of Vergina, allows a better appreciation of this period's artistic richness.

Hellenistic period and modern culture

The focus on the Hellenistic period over the course of the 19th century by scholars and historians has led to an issue common to the study of historical periods: historians see the period of focus as a mirror of the period in which they are living. Many 19th-century scholars contended that the Hellenistic period represented a cultural decline from the brilliance of classical Greece. Though this comparison is now seen as unfair and meaningless, it has been noted that even commentators of the time saw the end of a cultural era which could not be matched again. This may be inextricably linked with the nature of government. It has been noted by Herodotus that after the establishment of the Athenian democracy:
- ...the Athenians found themselves suddenly a great power.
Not just in one field, but in everything they set their minds to...As subjects of a tyrant, what had they accomplished?...Held down like slaves they had shirked and slacked; once they had won their freedom, not a citizen but he could feel like he was labouring for himself" Thus, with the decline of the Greek polis, and the establishment of monarchical states, the environment and social freedom in which to excel may have been reduced. A parallel can be drawn with the productivity of the city states of Italy during the Renaissance, and their subsequent decline under autocratic rulers. However, William Woodthorpe Tarn, between World War I and World War II and the heyday of the League of Nations, focused on the issues of racial and cultural confrontation and the nature of colonial rule. Michael Rostovtzeff, who fled the Russian Revolution, concentrated predominantly on the rise of the capitalist bourgeoisie in areas of Greek rule. Arnaldo Momigliano, an Italian Jew who wrote before and after the Second World War, studied the problem of mutual understanding between races in the conquered areas. Moses Hadas portrayed an optimistic picture of synthesis of culture from the perspective of the 1950s, while Frank William Walbank in the 1960s and 1970s had a materialistic approach to the Hellenistic period, focusing mainly on class relations. Recently, however, papyrologist C. Préaux has concentrated predominantly on the economic system, interactions between kings and cities, and provides a generally pessimistic view on the period. Peter Green, on the other hand, writes from the point of view of late 20th century liberalism, his focus being on individualism, the breakdown of convention, experiments, and a postmodern disillusionment with all institutions and political processes. - Art of the Hellenistic Age and the Hellenistic Tradition. Heilbrunn Timeline of Art History, Metropolitan Museum of Art, 2013. Retrieved 27 May 2013. Archived here. - Hellenistic Age. Encyclopædia Britannica, 2013. Retrieved 27 May 2013. Archived here. - "Alexander the Great and the Hellenistic Age". www.penfield.edu. Retrieved 2017-10-08. - Green, Peter (2008). Alexander The Great and the Hellenistic Age. London: Orion. ISBN 0-7538-2413-2. - Professor Gerhard Rempel, Hellenistic Civilization (Western New England College) Archived 2008-07-05 at the Wayback Machine.. - Ulrich Wilcken, Griechische Geschichte im Rahmen der Altertumsgeschichte. - Green, p. xvii. - "Hellenistic Age". Encyclopædia Britannica Online. Encyclopædia Britannica, Inc. Retrieved 8 September 2012. - Green, P. Alexander The Great and the Hellenistic Age. p. xiii. ISBN 978-0-7538-2413-9. - Ἑλληνιστής. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project. - Chaniotis, Angelos (2011). Greek History: Hellenistic. Oxford Bibliographies Online Research Guide. Oxford University Press. p. 8. ISBN 9780199805075. - Arnold, Matthew (1869). "Chapter IV". Culture and Anarchy. Smith, Elder & Co. p. 143. Arnold, Matthew; Garnett, Jane (editor) (2006). "Chapter IV". Culture and Anarchy. Oxford University Press. p. 96. ISBN 978-0-19-280511-9. - F.W. Walbank et al. THE CAMBRIDGE ANCIENT HISTORY, SECOND EDITION, VOLUME VII, PART I: The Hellenistic World, p. 1. - Green, Peter (2007). The Hellenistic Age (A Short History). New York: Modern Library Chronicles. - Green, Peter (1990); Alexander to Actium, the historical evolution of the Hellenistic age. University of California Press. Pages 7-8. - Green (1990), page 9. - Green (1990), page 14. 
- Green (1990), page 21. - Green (1990), page 30-31. - Green (1990), page 126. - Green (1990), page 129. - Green (1990), page 134. - Green (1990), p. 199 - Bugh, Glenn R. (editor). The Cambridge Companion to the Hellenistic World, 2007. p. 35 - Green, Peter; Alexander to Actium, the historical evolution of the Hellenistic age, page 11. - McGing, BC. The Foreign Policy of Mithridates VI Eupator, King of Pontus, P. 17. - Green (1990), p. 139. - Berthold, Richard M. Rhodes in the Hellenistic Age, p. 12. - Stanley M. Burstein, Walter Donlan, Jennifer Tolbert Roberts, and Sarah B. Pomeroy. A Brief History of Ancient Greece: Politics, Society, and Culture. Oxford University Presspage 255 - The Cambridge Ancient History, Volume 6: The Fourth Century BC by D. M. Lewis (Editor), John Boardman (Editor), Simon Hornblower (Editor), M. Ostwald (Editor), ISBN 0-521-23348-8, 1994, page 423, "Through contact with their Greek neighbors some Illyrian tribe became bilingual (Strabo Vii.7.8.Diglottoi) in particular the Bylliones and the Taulantian tribes close to Epidamnus" - Dalmatia: research in the Roman province 1970-2001 : papers in honour of J.J by David Davison, Vincent L. Gaffney, J. J. Wilkes, Emilio Marin, 2006, page 21, "...completely Hellenised town..." - The Illyrians: history and culture, History and Culture Series, The Illyrians: History and Culture, Aleksandar Stipčević, ISBN 0-8155-5052-9, 1977, page 174 - The Illyrians (The Peoples of Europe) by John Wilkes, 1996, page 233&236, "The Illyrians liked decorated belt-buckles or clasps (see figure 29). Some of gold and silver with openwork designs of stylised birds have a similar distribution to the Mramorac bracelets and may also have been produced under Greek influence." - Carte de la Macédoine et du monde égéen vers 200 av. J.-C. - Webber, Christopher; Odyrsian arms equipment and tactics. - The Odrysian Kingdom of Thrace: Orpheus Unmasked (Oxford Monographs on Classical Archaeology) by Z. H. Archibald,1998,ISBN 0-19-815047-4, page 3 - The Odrysian Kingdom of Thrace: Orpheus Unmasked (Oxford Monographs on Classical Archaeology) by Z. H. Archibald,1998,ISBN 0-19-815047-4, page 5 - The Peloponnesian War: A Military Study (Warfare and History) by J. F. Lazenby,2003, page 224,"... number of strongholds, and he made himself useful fighting 'the Thracians without a king' on behalf of the more Hellenized Thracian kings and their Greek neighbours (Nepos, Alc. ... - Walbank et al. (2008), p. 394. - Delamarre, Xavier. Dictionnaire de la langue gauloise. Editions Errance, Paris, 2008, p. 299 - Boardman, John (1993), The Diffusion of Classical Art in Antiquity, Princeton University Press, p.308. - Celtic Inscriptions on Gaulish and British Coins" by Beale Poste p.135 - Momigliano, Arnaldo. Alien Wisdom: The Limits of Hellenization, pp. 54-55. - Tang, Birgit (2005), Delos, Carthage, Ampurias: the Housing of Three Mediterranean Trading Centres, Rome: L'Erma di Bretschneider (Accademia di Danimarca), pp. 15–16, ISBN 8882653056 - Lapunzina, Alejandro (2005), Architecture of Spain, London: Greenwoood Press, ISBN 0-313-31963-4, pp. 69-71. - Tang, Birgit (2005), Delos, Carthage, Ampurias: the Housing of Three Mediterranean Trading Centres, Rome: L'Erma di Bretschneider (Accademia di Danimarca), pp. 17–18, ISBN 8882653056 - Lapunzina, Alejandro (2005), Architecture of Spain, London: Greenwoood Press, ISBN 0-313-31963-4, p. 70. - Lapunzina, Alejandro (2005), Architecture of Spain, London: Greenwoood Press, ISBN 0-313-31963-4, pp. 70-71. 
- Tang, Birgit (2005), Delos, Carthage, Ampurias: the Housing of Three Mediterranean Trading Centres, Rome: L'Erma di Bretschneider (Accademia di Danimarca), pp. 16–17, ISBN 8882653056 - Green (1990), 187 - Green (1990), 190 - Green (1990), p. 193. - Green (1990), 291. - Jones, Kenneth Raymond (2006). Provincial reactions to Roman imperialism: the aftermath of the Jewish revolt, A.D. 66-70, Parts 66-70. University of California, Berkeley. p. 174. ISBN 978-0-542-82473-9. ... and the Greeks, or at least the Greco-Macedonian Seleucid Empire, replace the Persians as the Easterners. - Society for the Promotion of Hellenic Studies (London, England) (1993). The Journal of Hellenic studies, Volumes 113-114. Society for the Promotion of Hellenic Studies. p. 211. The Seleucid kingdom has traditionally been regarded as basically a Greco-Macedonian state and its rulers thought of as successors to Alexander. - Baskin, Judith R.; Seeskin, Kenneth (2010). The Cambridge Guide to Jewish History, Religion, and Culture. Cambridge University Press. p. 37. ISBN 978-0-521-68974-8. The wars between the two most prominent Greek dynasties, the Ptolemies of Egypt and the Seleucids of Syria, unalterably change the history of the land of Israel.... As a result the land of Israel became part of the empire of the Syrian Greek Seleucids. - Glubb, John Bagot (1967). Syria, Lebanon, Jordan. Thames & Hudson. p. 34. OCLC 585939. In addition to the court and the army, Syrian cities were full of Greek businessmen, many of them pure Greeks from Greece. The senior posts in the civil service were also held by Greeks. Although the Ptolemies and the Seleucids were perpetual rivals, both dynasties were Greek and ruled by means of Greek officials and Greek soldiers. Both governments made great efforts to attract immigrants from Greece, thereby adding yet another racial element to the population. - Bugh, Glenn R. (editor). The Cambridge Companion to the Hellenistic World, 2007. p. 43. - Steven C. Hause; William S. Maltby (2004). Western civilization: a history of European society. Thomson Wadsworth. p. 76. ISBN 978-0-534-62164-3. The Greco-Macedonian Elite. The Seleucids respected the cultural and religious sensibilities of their subjects but preferred to rely on Greek or Macedonian soldiers and administrators for the day-to-day business of governing. The Greek population of the cities, reinforced until the second century BC by emigration from Greece, formed a dominant, although not especially cohesive, elite. - Victor, Royce M. (2010). Colonial education and class formation in early Judaism: a postcolonial reading. Continuum International Publishing Group. p. 55. ISBN 978-0-567-24719-3. Like other Hellenistic kings, the Seleucids ruled with the help of their “friends” and a Greco-Macedonian elite class separate from the native populations whom they governed. - Britannica, Seleucid kingdom, 2008, O.Ed. - Bugh, Glenn R. (editor). The Cambridge Companion to the Hellenistic World, 2007, p. 44. - Green (1990), 293-295. - Green (1990), 304. - Green (1990), p. 421. - "The Pergamon Altar". Smarthistory at Khan Academy. Retrieved April 5, 2013. - "Pergamum". Columbia Electronic Encyclopedia, 6th Edition, 1. - Shipley (2000) pp. 318-319. - Justin, Epitome of Pompeius Trogus, 25.2 and 26.2; the related subject of copulative compounds, where both are of equal weight, is exhaustively treated in Anna Granville Hatcher, Modern English Word-Formation and Neo-Latin: A Study of the Origins of English (Baltimore: Johns Hopkins University), 1951. 
What is the Total Probability Rule? The Total Probability Rule (also known as the Law of Total Probability) is a fundamental rule in statistics relating conditional and marginal probabilities. The rule states that if the probability of an event is not known directly, it can be calculated from the known probabilities of that event under several distinct, mutually exclusive conditions. Consider the following situation: there are three events, A, B, and C. Events B and C are mutually exclusive, while event A intersects with both. We do not know the probability of event A. However, we know the probability of event A under condition B and the probability of event A under condition C. The total probability rule states that by using these conditional probabilities, we can find the probability of event A (i.e., the total probability).

Formula for the Total Probability Rule. Mathematically, the total probability rule can be written as P(A) = Σ P(A ∩ Bn) = Σ P(A | Bn) × P(Bn), summed over n, where
- n – the number of distinct (mutually exclusive) events
- Bn – the n-th distinct event

Remember that the multiplication probability rule states the following: P(A ∩ B) = P(A|B) × P(B). For example, the total probability of event A in the situation above can be found using the equation P(A) = P(A ∩ B) + P(A ∩ C).

The Total Probability Rule and Decision Trees. The decision tree is a simple and convenient method of visualizing problems that involve the total probability rule. The decision tree depicts all possible events in sequence. Using the decision tree, you can quickly identify the relationships between the events and calculate the conditional and joint probabilities. To understand how to use a decision tree to calculate a total probability, consider the following example: You are a stock analyst following ABC Corp. You discovered that the company is planning to launch a new project that is likely to affect the company's stock price. You identified the following probabilities:
- There is a 60% probability of launching the new project.
- If the company launches the project, there is a 75% probability that its stock price will increase.
- If the company does not launch the project, there is a 30% probability that its stock price will increase.

You want to find the probability that the company's stock price will increase. The decision tree for the problem branches first on whether the project is launched and then on whether the stock price increases; only the branches in which the stock price increases are needed. Using the decision tree, we can calculate the following joint probabilities:
P(Launch and stock price increases) = P(Launch) × P(Increase | Launch) = 0.6 × 0.75 = 0.45
P(No launch and stock price increases) = P(No launch) × P(Increase | No launch) = 0.4 × 0.30 = 0.12
According to the total probability rule, the probability of a stock price increase is P(Stock price increases) = 0.45 + 0.12 = 0.57. Thus, there is a 57% probability that the company's share price will increase.
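To make the calculation concrete, here is a minimal Python sketch (added for illustration, not part of the original text) that applies the law of total probability to the hypothetical ABC Corp figures above; the variable names are illustrative only.

```python
# Total probability rule applied to the ABC Corp example from the text.
# All probabilities are the illustrative figures given above, not real data.

p_launch = 0.60               # P(project is launched)
p_no_launch = 1 - p_launch    # P(project is not launched)

p_up_given_launch = 0.75      # P(stock rises | launch)
p_up_given_no_launch = 0.30   # P(stock rises | no launch)

# Law of total probability: P(A) = sum over n of P(A | B_n) * P(B_n)
p_up = (p_up_given_launch * p_launch
        + p_up_given_no_launch * p_no_launch)

print(f"P(stock price increases) = {p_up:.2f}")  # 0.57
```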
A major representative of English materialism was John Locke (1632-1704). He substantiated the principle of materialistic sensationalism – the origin of all knowledge from the sensory perception of the external world. The struggle against scholasticism brought to the fore the question of the method of knowledge, which was closely related to questions of the theory of knowledge; it is to these questions that Locke's main work, An Essay Concerning Human Understanding, is devoted. Locke's treatise begins with a critique of Descartes' doctrine of innate ideas. Locke argues that there are no innate ideas in the human mind: they exist neither in theoretical thinking nor in moral convictions. The only source of all ideas is experience. In accordance with this, he indicates two experimental (empirical) sources of our ideas: the first is sensation, the second is reflection. Ideas of sensation arise from the action of things outside us on the sense organs. Such, for example, are the ideas acquired through sight, hearing, touch, smell, etc. The ideas of sensation are the basic fund of all our ideas. Ideas of reflection arise in us when our mind considers the inner states and activities of our soul. Such, for example, are the ideas about the various operations of our thinking, emotions, desires, etc. Through the ideas of sensation we perceive the qualities of things. Locke divides these ideas into two classes: 1) ideas of primary qualities and 2) ideas of secondary qualities. Locke calls primary those qualities that belong to the objects themselves and reside in them just as they appear to us in our sensations. Primary qualities are inseparable from the body and remain in it constantly through all its changes. Since the primary qualities are in the bodies themselves, Locke calls them real qualities; such are density, extension, figure, movement (or rest), and number. The ideas of primary qualities are copies of those qualities themselves. Locke calls secondary those qualities that seem to us to belong to the things themselves but in fact are not in the things themselves. Among the ideas of secondary qualities he counts the ideas of color, sound, taste, etc. In things themselves there is only the ability to produce these sensations in us. What in the idea appears pleasant, blue or warm is, in the things themselves, only a certain volume, figure and movement of particles inaccessible to perception. However, for all the differences between primary and secondary qualities, they have something in common: both produce their ideas through a "push". So, for example, the violet, through the "pushes" of particles of matter inaccessible to perception, differing in volume and shape, degrees and types of their movements, produces in the soul the ideas of the blue color and smell of this flower. Locke's doctrine of the difference between primary and secondary qualities is a development of ideas outlined by the ancient Greek atomist Democritus and revived in modern times by Descartes and Galileo. This doctrine is based on the absolute opposition of the subjective to the objective.

Rationalism of Descartes. The philosopher René Descartes (1596–1650) stood at the origins of the rationalist tradition. Descartes was educated at the Jesuit College of La Flèche. He began early to doubt the value of book learning, since, in his opinion, many sciences lack a reliable foundation. Leaving his books, he began to travel.
Although Descartes was a Catholic, at one time he participated on the side of the Protestants in the Thirty Years’ War. At the age of 23, while staying in winter quarters in Germany, he formulated the main ideas of his method. Ten years later, he moved to Holland to do research in peace and quiet. In 1649 he went to Stockholm to Queen Christina. The Swedish winter was too harsh for him, he fell ill and died in February 1650. His major works include Discourse on Method (1637) and Metaphysical Meditations (1647), Elements of Philosophy, Rules for the Direction of the Mind. According to Descartes, there are disagreements in philosophy on any issue. The only truly reliable method is mathematical deduction. Therefore, Descartes considers mathematics as a scientific ideal. This ideal became the defining factor of Cartesian philosophy. Descartes is the founder of rationalism (from ratio – mind) – a philosophical direction, whose representatives considered the mind to be the main source of knowledge. Rationalism is the opposite of empiricism. If philosophy is to be a deductive system like Euclidean geometry, then it is necessary to find the true premises (axioms). If the premises are not obvious and doubtful, then the conclusions (theorems) of the deductive system are of little value. But how can one find absolutely obvious and definite premises for a deductive philosophical system? Methodological doubt allows answering this question. It is a means of excluding all propositions that we can logically doubt, and a means of finding propositions that are logically certain. It is precisely such indisputable propositions that we can use as premises of true philosophy. Methodical doubt is a way (method) of excluding all statements that cannot be prerequisites of a deductive philosophical system. With the help of methodical doubt, Descartes puts various kinds of knowledge to the test. 1. First, he considers the philosophical tradition. Is it possible in principle to doubt what the philosophers say? Yes, says Descartes. This is possible because philosophers did, and still do, disagree on many issues. 2) Is it possible to logically doubt our sense perceptions? Yes, says Descartes, and makes the following argument. It is a fact that sometimes we are subject to illusions and hallucinations. For example, a tower may appear to be round, although it is later discovered to be square. Our senses cannot provide us with absolutely obvious premises for a deductive philosophical system. 3) As a special argument, Descartes points out that he has no criterion for determining whether he is fully conscious or in a state of sleep. For this reason, he may in principle doubt the real existence of the external world. Is there anything we cannot doubt? Yes, says Descartes. Even if we doubt everything, we cannot doubt that we doubt, that is, that we are conscious and exist. We therefore have the absolutely true statement: “I think, therefore I am” (cogito ergo sum). The person who makes the statement cogito ergo sum expresses knowledge that he cannot doubt. It is reflexive knowledge and cannot be refuted. He who doubts cannot, as a doubter, doubt (or deny) that he doubts and therefore that he exists. Of course, this statement is not enough to build a whole deductive system. Additional claims by Descartes are related to his proof of the existence of God. From the idea of the perfect, he concludes that there is a perfect being, God. A perfect God does not deceive people. 
This gives us confidence in the method: everything that seems to us as self-evident as the statement cogito ergo sum must be knowledge that is just as certain. This is the source of the Cartesian rationalistic theory of knowledge: the criterion for the truth of knowledge is not empirical justification (as in empiricism), but ideas that appear clear and distinct before our mind. Descartes claims that for him, as self-evident as his own existence and the presence of consciousness, is the existence of a thinking being (soul) and an extended being (matter). Descartes introduces the doctrine of a thinking thing (soul) and an extended thing (matter) as the only existing (besides God) two fundamentally different phenomena. The soul is only thinking, not extended. Matter is only extended, but not thinking. Matter is understood with the help of mechanics alone (mechanical-materialistic picture of the world), while the soul is free and rational. Descartes’ criterion of truth is rationalistic. What the mind, as a result of systematic and consistent reasoning, considers as clear and distinct can be accepted as true. Sense perceptions must be controlled by the mind. It is important for us to understand the position of the rationalists (Descartes, Leibniz and Spinoza). Roughly speaking, it lies in the fact that we have two kinds of knowledge. In addition to experimental knowledge of individual phenomena of the external and internal world, we can obtain rational knowledge about the essence of things in the form of universally valid truths. The argument between rationalism and empiricism is mainly centered around the second kind of knowledge. Rationalists argue that with the help of rational intuition we gain knowledge of universal truths (for example, we know God, human nature and morality). Empiricists deny the rational intuition that gives us such knowledge. According to empiricism, we gain knowledge through experience, which they ultimately reduce to sensory experience. Experience can be interpreted as a passive perceptual process in which the subject is supplied with simple impressions of external things. Then the subject combines these impressions according to their appearance together or separately, according to their similarity and difference, which leads to the emergence of knowledge about these perceived things. The exception is knowledge gained through concept analysis and deduction, as is the case in logic and mathematics. However, these two kinds of knowledge, according to empiricists, do not tell us anything about the essential features of being. It can be said that rationalists think that we are able to know reality (something real) with the help of concepts alone, while empiricists derive all knowledge of reality from experience. Descartes’ methodology was anti-scholastic. This orientation was manifested, first of all, in the desire to achieve such knowledge that would strengthen the power of man over nature, and would not be an end in itself or a means of proving religious truths. Another important feature of Cartesian methodology is the critique of scholastic syllogistics. Scholasticism, as is well known, considered the syllogism the main instrument of man’s cognitive efforts. Descartes sought to prove the failure of this approach. He did not refuse to use the syllogism as a way of reasoning, a means of communicating already discovered truths. But new knowledge, in their opinion, syllogism cannot give. 
Therefore, he sought to develop a method that would be effective in finding new knowledge. Rationalism is a philosophical direction that recognizes reason as the basis of human knowledge and behavior. Opposes both irrationalism and sensationalism. Having spoken out against the Middle Ages. scholasticism and religious dogmatism, classical rationalism of the 17th–18th centuries. proceeded from the idea of natural order – an endless causal chain that permeates the whole world. Scientific (i.e. objective, universal, necessary) knowledge, according to rationalism, is achievable only through reason – both the source of knowledge and the criterion of its truth. The limitations of rationalism consisted in the separation of rational cognition from sensory cognition, in the idealistic conception of innate ideas. Rationalism is one of the philosophical sources of the ideology of the Enlightenment. (development of materialism) Descartes is a dualist, mathematician, physicist. Recognized the existence of 2 substances: 1. Spiritual substance. 2. Material substance. Matter, which has the attributes of extension, and spirit, whose attribute is thinking. Matter is an infinite universe, which consists of capsules, divisible to infinity. Descartes endowed matter with an independent force and considered movement as a manifestation of the life of matter, which is the only substance, the only basis of being. He criticized scholasticism and theology from the standpoint of rationalism. This concept contains two elements: 1) the idea of reason (logical thinking) as the highest way to comprehend the truth. The idea of omnipotence, the infallibility of reason. Doubt is central. 2) The source of rational knowledge of the world is innate ideas. Descartes discovered the most important method of scientific thinking – inductive. Descartes’ doctrine of matter/corporeal substance identifies matter with extension. The common cause of motion is God, he created matter and, together with motion and rest, preserves in it the same amount of motion and rest. In man, the soulless bodily mechanism is connected with the thinking soul. The essence of the soul is in thinking. Substance is a thing that does not need anything other than itself for its existence, therefore the substance is God. The world is divided into spiritual and material substances. Spiritual substances are indivisible. The main attributes are thinking and extension. The main attribute of a material substance is extension. Material substance Descartes identifies with nature. The concept of innate ideas: a person is born with a relatively formed consciousness. General approaches to understanding the world: 1) true knowledge must be presented in a clear form. 2) it is impossible (necessary) to decompose a thing into its constituent elements. 3) knowledge is achieved from simple to complex. 4) when studying a thing, it is impossible (necessary) to take into account all its connections. The universe is filled with matter. The form of its existence is movement (indestructible and uncreatable). Philosophy of Marxism. Classical Marxist philosophy arose in Germany in the 40s of the 19th century on the wave of the labor movement, as an ideological expression of this process. Its founders were Marx and Engels, and its theoretical sources are French materialism of the 18th century and German classical philosophy. The specificity of Marxist philosophy consisted in its initial appeal to the problems of the earth, i.e. 
to topical issues of public life – the economy, social relations, political life. The philosophy of Marxism is historical and dialectical materialism. Materialism was applied to the study of nature, society and man himself. Dialectics is inherent in Marxist philosophy as a method of philosophical thinking and a theory of development. This philosophy is characterized by an orientation towards a practical change in the world in which a working person exists. The philosophy of Marxism is called dialectical and historical materialism. Its founders were Karl Marx (1818-1883) and Friedrich Engels (1820-1895). The philosophy of Marxism originated in the 1840s in Germany, and its emergence was due to a number of circumstances: The beginning of the industrial revolution, the accelerated formation of the capitalist mode of production and the revolutionary events in Europe, which set a number of tasks for philosophy in the study of the laws of development of society. There was a need for a philosophical understanding of the achievements in natural science in the first half of the 19th century, which changed the scientific picture of the world: first of all, this is the discovery of the cellular structure of living organisms, the law of conservation and transformation of energy, Darwin’s evolutionary doctrine, which approved the idea of communication and development in the understanding of nature. There were theoretical prerequisites that made it possible to take further steps in the development of philosophical knowledge. The leading role in this was played by German classical philosophy – the Hegelian doctrine of the dialectical method and Feuerbach’s materialism. The philosophical evolution of Marx and Engels was expressed in the transition from idealism to materialism and was the basis for their rethinking of their economic and socio-political views. English political economy in the person of A. Smith and D. Ricardo and French utopian socialism (A. de Saint-Simon and C. Fourier) had a significant influence on the formation of the philosophical positions of Mraks and Engels. 1844-1848 is a very important period in the life of Marx and Engels. when they get to know each other and develop the philosophical foundations of a new worldview in the process of revising the philosophical heritage of Hegel and Feuerbach. The main provisions of the new philosophy were: An organic combination of the principle of materialism with the dialectical method of cognizing nature and society, which found expression in the development of dialectical and historical materialism. Using the dialectical method of thinking developed by Hegel, Marx and Engels applied it to the analysis of objective reality, arguing that subjective dialectics (the dialectics of thinking) is nothing more than a reflection in the minds of people of objective dialectics, that is, the development and connections of nature itself and society. The central category of Marxism was “practice”, understood as a purposeful socio-historical material activity of people to transform the objective world. Thus, the active active nature of man’s attitude to the world (the transformation of nature and society) was emphasized. Practice was also considered as the basis, source and goal of knowledge and an objective criterion of truth. 
Quite innovative in Marxism was the consideration of society as a complex system in which the leading role is played by material being, based on the economic activity of people, which gives rise to the social class division of society. The thesis of the primacy of social being and the secondary nature of social consciousness was a way of solving the main question of philosophy in relation to society. This made it possible to overcome the one-sidedness of social idealism, which had dominated the history of philosophical thought until the middle of the 19th century. The extension of the materialist principle of explaining the world to the understanding of history made it possible to see internal social contradictions as a source of the development of society. The historical process appeared as a progressive succession of socio-economic formations and the methods of material production underlying them. The humanistic orientation of Marxist philosophy is connected with the search for ways to free a person from social alienation. It is this idea that permeated all the joint early works of Marx and Engels, connected with the rethinking of Feuerbach's anthropological materialism. Shared ideological attitudes did not at all exclude the peculiarities of the philosophical views of each of the founders of Marxism. Thus, Engels focused his attention on the study of the problems of the philosophy of nature; in his works "Dialectics of Nature" and "Anti-Dühring" he gives a philosophical analysis of the achievements of natural science in creating a scientific picture of the world. The principles he put forward for classifying the forms of motion of matter, and his study of the processes of anthropogenesis and sociogenesis, have not lost their significance for modern science. The philosophical views of Marx are essentially anthropocentric, since he is primarily interested in the problems of the essence of man and the conditions of his existence in society. This is the focus of his early work "Economic and Philosophical Manuscripts of 1844", first published in 1932, in which he explores the conditions of human alienation in society. The basis of social alienation, according to Marx, is the alienation of a person in the sphere of the economy, associated with the emergence of private property, which leads to the alienation of a person from the very process of labor and from its products, as well as to alienation in the sphere of communication, to the breaking of social ties. The process of historical development is considered by him as a gradual removal of social alienation and an increase in the degree of human freedom in society. Communism as an ideal of social development must lead to the elimination of alienation and the creation of conditions for the free and harmonious development of man. In fact, the creation of the main work of his life, Capital, was prompted not only by an interest in analyzing the tendencies of development of the bourgeois economic system, but also by the search for real conditions for the liberation of man from the degrading consequences of forced labor. Thus, in contrast to Feuerbach's abstract humanism, Marx's humanism is based on a deep analysis of reality itself. The Marxist solution to the problem of human alienation is based on the notion that capitalist society is an inhuman environment that generates social inequalities. Marxism divided the entire historical process into two major epochs: 1. Prehistory (primitive, slaveholding, feudal and bourgeois formations).
In these societies, a person is not free, because he is suppressed by the power of the community or the state, by the elements of the market, etc. Prehistory is to be replaced by true history, which will be created by conscious people. The idea of a socialist revolution is the idea of a radical way of transferring society from a state of unfreedom into the realm of true freedom. In Marxism, the revolution is seen as a change in the economic foundations of society, the overcoming of private property as a source of the exploitation of man by man. This revolution must be carried out by the proletariat, as a propertyless class, and the revolution itself will become the engine of the historical process. According to Marxists, communism will become a new era in the history of mankind, an era of complete human control over the social and natural world. The formation of communism is a long process, a period of profound transformations in the entire system of social relations, a change in the very way of life of people. As a result, an association of free workers will be established on a worldwide scale. The Communist Manifesto is the first programmatic work of Marxism. Capital is the main work of Marxism, in which Marx revealed the economic structure of contemporary capitalist society. In Dialectics of Nature, Engels developed the Marxist doctrine of matter, its properties, forms, and modes of existence. Marxism consists of three parts: materialist philosophy, political economy, and the theory of scientific socialism. In Western Europe, Marxism was developed by Mehring, Lafargue, Kautsky and others; thanks to their efforts, Marxism became an international phenomenon. In Russia, Marxist theory began to penetrate in the 1880s thanks to Plekhanov and his associates. Leninism is the Marxism of the era of the preparation and practical implementation of proletarian revolutions in some European countries. Lenin's views are expounded in "Philosophical Notebooks", "State and Revolution", and "Materialism and Empirio-Criticism". Lenin's views were very radical. In Marxist theory, he saw, first of all, an instrumental function that would serve the practice of political struggle. The main thing in the system of Marxism is the spirit of active transformation of society in an effort to arrange the world reasonably and justly. The fate of the teachings of Marx and Engels is very dramatic, since the further development of Marxism as a socio-political and philosophical trend was accompanied by countless falsifications and one-sided interpretations. In this regard, one can speak of a variety of versions of Marxism in the context of different eras and of the peculiarities of the national reception of the teaching in different countries. Thus, in relation to Russia, one can speak of Lenin's, Plekhanov's, Stalin's and other versions of Marxism. The main stages in the formation and development of Marxist philosophy: the Young Hegelian period in the works of Marx and Engels – active assimilation of the theoretical heritage of the German classics, a Hegelian position in philosophy, democratic sympathies of Marx and Engels in the socio-political field; this period covers 1839-43. Criticism of Hegel's idealism – the beginning of the formation of Marxist views proper, the transition to the positions of materialism and communism; 1843-44. The final formulation of the philosophical ideas of Marxism.
1845-50 The development of the philosophical, socio-philosophical and methodological provisions of Marxism in the works of Marx and Engels in the remaining period of their lives. The development of Marxist philosophy in the works of the students of Marx and Engels in the 70s – 90s of the XIX century. Lenin’s stage in the philosophy of Marxism. It covers 1895 – 1924. Marxist-Leninist philosophy in the USSR in the 20-80s of the XX century. Western Marxism in the 20th century. The current state of Marxist thought. The main philosophical ideas of Marxism Idea of practice; Ideas and principles of materialistic dialectics; Dialectical-materialistic understanding of history; The most significant philosophical works of the classics of Marxism K. Marx’s works: “On the Criticism of the Hegelian Philosophy of Law”. 1843; “Economic and Philosophical Manuscripts of 1844”; “Theses on Feuerbach” (1845), “Poverty of Philosophy” (1847); “The Eighteenth Brumaire of Louis Bonaparte” (1852); “Capital” (1857-70). Works of F. Engels: “Sketch to the Critique of Political Economy” (1844); Anti-Dühring (1878), Dialectics of Nature (1873-76); “The Origin of the Family, Private Property and the State” (1884); “Ludwig Feuerbach and the End of Classical German Philosophy” (1886), “Letters on Historical Materialism” (1890-94); Joint works of K. Marx and F. Engels: “The Holy Family” (1845); “German Ideology” (1846); “Manifesto of the Communist Party” (1848). Summary of the main provisions of the philosophy of Marxism The processing by Marx and Engels of the idealistic dialectics of Hegel and the main provisions of the materialism of that time was carried out not through their mechanical combination, but through the prism of the principle of human activity. This is the problem of concretizing the essence of a person: either he simply lives in the world, contemplating it, or he changes reality, makes it suitable for himself. Labor as an activity to change nature and social relations is an essential parameter of human being. Marx and Engels use practice as a synonym for labor, a category concretizing the concept of labor. Under it, they understood the sensual-objective, purposeful activity of a person, focused on the development and transformation of the conditions of his existence and, in parallel with this, on the improvement of the person himself. Practice is primary and determines the spiritual world of a person, his culture. It has a social character, serves as the basis for communication between people, a prerequisite for various forms of community life. The practice is historical, its methods and forms change over time, become more and more refined, contribute to the manifestation of the most diverse aspects of human essence, allow discovering new aspects in the surrounding world. On the need to introduce the idea of practice into philosophy, Marx first speaks in the work “Theses on Feuerbach”, where he criticizes Feuerbach’s materialism for its contemplative character. Practice is an objective activity that has the following structure: need – goal – motive – actually expedient activity – means – result. Although practice is the opposite of theory, there is a close relationship between them on the following points: Practice is a source of theory, acts as a “customer” of certain developments. Things that have no practical value are developed extremely rarely. Practice is the criterion of the truth of the theory. Practice is the goal of any theory. 
Practice as a holistic process is described using the categories of objectification and deobjectification. Objectification is a process in which human abilities pass into an object and are embodied in it, due to which this object becomes a human object. Activity is objectified not only in the external world, but also in the qualities of the person himself. Deobjectification is a process in which the properties, essence, logic of an object become the property of a person. Man appropriates the forms and content of the previous culture. The dialectic of objectification and deobjectification in the philosophy of Marxism clearly demonstrates the structure of practice, shows the mechanisms of continuity in the development of culture. Marx and Engels used Hegel’s achievements in developing the dialectical method in order to show the essence and dynamics of human practical activity. Marxist philosophy is often called dialectical and historical materialism, emphasizing that its core is the method of materialist dialectics. The term “dialectics” or “dialectical” is used in the works of the classics of Marxism in two main meanings: “objective dialectics” and “subjective dialectics”. Objective dialectics is life itself, which is an integral system that exists and develops according to dialectical laws and principles. Subjective dialectics is the reproduction of objective dialectics in various forms of human activity, but, above all, in cognition. Sometimes, instead of the expression “subjective dialectics” the concept of “dialectical method” is used. The development of materialist dialectics as a theory and method was carried out by Marx and Engels in the following works: “German Ideology”, “Holy Family”, “Capital”, “Theses on Feuerbach”, “Dialectics of Nature”, “Anti-Dühring”. The main thing in dialectics is the understanding of the world as an organic system. This means that it consists of many diverse, but necessary, interconnected elements. And, most importantly, it contains the cause of its development in itself. Dialectics takes place where the development of the world is carried out at the expense of internal contradiction. Thus, dialectics acts as a doctrine of the world as an integral system, the main law of which is the law of the contradictory, necessary connection of its elements. Under “connection” in dialectics is understood such a relationship between things or processes, when a change in properties or states in one automatically entails a change in properties or state in others. The concept of development is central in dialectics. It is seen as self-development. Following Hegel, Marx and Engels subject the process of development to the action of three laws: The law of unity and struggle of opposites. The law of mutual transition of quantitative and qualitative changes. The law of negation of negation. Each of these laws expresses a certain aspect of the integral process of development: the law of the unity and struggle of opposites characterizes the source of development; the law of mutual transition of quantitative and qualitative changes is the mechanism of development, and the law of negation of negation is the goal of development. Most critics of Marxist philosophy believe that assertions about the objective character of dialectics are groundless. If dialectics has the right to exist, then only as one of the methods of cognition. The idea of dialectics as a system of methods of cognition occupies an important place in Marxism. 
Unlike their later critics, Marx and Engels considered the dialectical method to be the universal method of cognition. The dialectical method is a system of methods and principles that make it possible to reproduce in thought the objective logic of an object or phenomenon. Basic Methodological Principles of Marxist Dialectics The principle of system. The principle of ascent from the abstract to the concrete. The principle of unity of historical and logical. Categories of Marxist dialectics Marx and Engels almost completely borrowed the categorical apparatus of their philosophy from Hegel. Categories are built into a systematic unity, according to the logic of the movement of thought from the most general and abstract to the concrete. At the beginning stands the category of the individual, at the end – the category of reality. The transition from one category to another is carried out according to the laws of dialectics. Thus, dialectics, as a method, is a system of interrelated and interdependent laws, principles and categories that prescribes a strictly defined order of cognition and transformation of reality.
Want to stay on top of all the space news? Follow @universetoday on Twitter Some of the most frequently asked questions we get here at Universe Today and Astronomy Cast deal with black holes. Everyone wants to know what conditions would be like at the event horizon, or even inside a black hole. Answering those questions is difficult because so much about black holes is unknown. Black holes can’t be observed directly because their immense gravity won’t let light escape. But in just the past week, three different research teams have released their findings in their attempts to create black holes – or at least conditions analogous to them to advance our understanding. Make Your Own Accretion Disk A team of researchers from Osaka University in Japan wanted to sharpen their insights into the behavior of matter and energy in extreme conditions. What could be more extreme than the conditions of the swirling cloud of matter surrounding a black hole, known as the accretion disk? Their unique approach was to blast a plastic pellet with high-energy laser beams. Accretion disks get crunched and heated by a black hole’s gravitational energy. Because of this, the disks glow in x-ray light. Analyzing the spectra of these x-rays gives researchers clues about the physics of the black hole. However, scientists don’t know precisely how much energy is required to produce such x-rays. Part of the difficulty is a process called photoionization, in which the high-energy photons conveying the x-rays strip away electrons from atoms within the accretion disk. That lost energy alters the characteristics of the x-ray spectra, making it more difficult to measure precisely the total amount of energy being emitted. To get a better handle on how much energy those photoionized atoms consume, researchers zapped a tiny plastic pellet with 12 laser beams fired simultaneously and allowed some of the resulting radiation to blast a pellet of silicon, a common element in accretion disks. The synchronized laser strikes caused the plastic pellet to implode, creating an extremely hot and dense core of gas, or plasma. That turned the pellet into “a source of [immensely powerful] x-rays similar to those from an accretion disk around a black hole,” says physicist and lead author Shinsuke Fujioka. The team said the x-rays photoionized the silicon, and that interaction mimicked the emissions observed in accretion disks. By measuring the energy lost from the photoionization, the researchers could measure total energy emitted from the implosion and use it to improve their understanding of the behavior of x-rays emitted by accretion disks. The Portable Black Hole Another group of physicists created a tiny device that can create a black hole by sucking up microwave light and converting it into heat. At just 22 centimeters across, the device can fit in your pocket. The device uses ‘metamaterials’, specially engineered materials that can bend light in unusual ways. Previously, scientists have used such metamaterials to build ‘invisibility carpets’ and super-clear lenses. This latest black hole was made by Qiang Chen and Tie Jun Cui of Southeast University in Nanjing, China. Real black holes use their huge mass to warp space around it. Light that travels too close to it can become trapped forever. The new meta-black hole also bends light, but in a very different way. Rather than relying on gravity, the black hole uses a series of metallic ‘resonators’ arranged in 60 concentric circles. 
The resonators affect the electric and magnetic fields of a passing light wave, causing it to bend towards the centre of the hole. It spirals closer and closer to the black hole’s ‘core’ until it reaches the 20 innermost layers. Those layers are made of another set of resonators that convert light into heat. The result: what goes in cannot come out. “The light into the core is totally absorbed,” Cui said. Not only is the device useful in studying black holes, but the research team hopes to create a version of the device that will suck up light of optical frequencies. If it works, it could be used in applications such as solar cells. Black holes in your computer? Could you create a black hole in your computer? Maybe if you had a really big one. Scientists at Rochester Institute of Technology (RIT) hope to make use of two of the fastest supercomputers in the world in their quest to “shine light” on black holes. The team was approved for grants and computing time to study the evolution of black holes and other objects with the “NewHorizons,” a cluster consisting of 85 nodes with four processors each, connected via an Infiniband network that passes data at 10-gigabyte-per-second speeds. The team has created computer algorithms to simulate with mathematics and computer graphics what cannot be seen directly. “It is a thrilling time to study black holes,” said Manuela Campanelli, center director. “We’re nearing the point where our calculations will be used to test one of the last unexplored aspects of Einstein’s General Theory of Relativity, possibly confirming that it properly describes the strongest gravitational fields in the universe.”
I. Number Systems II. Algebra III. Trigonometry IV. Coordinate Geometry V. Geometry VI. Mensuration VII. Statistics and Probability Appendix: 1. Proofs in Mathematics 2. Mathematical Modelling

Unit I: Number Systems. Real Numbers (Periods 15) Euclid's division lemma, Fundamental Theorem of Arithmetic – statements after reviewing work done earlier and after illustrating and motivating through examples. Proofs of results – irrationality of √2, √3, √5; decimal expansions of rational numbers in terms of terminating/non-terminating recurring decimals.

Unit II: Algebra. 1. Polynomials (Periods 6) Zeros of a polynomial. Relationship between zeros and coefficients of a polynomial, with particular reference to quadratic polynomials. Statement and simple problems on the division algorithm for polynomials with real coefficients. 2. Pair of Linear Equations in Two Variables (Periods 15) Pair of linear equations in two variables. Geometric representation of different possibilities of solutions/inconsistency. Algebraic conditions for the number of solutions. Solution of a pair of linear equations in two variables algebraically – by substitution, by elimination and by cross multiplication. Simple situational problems must be included. Simple problems on equations reducible to linear equations may be included. 3. Quadratic Equations (Periods 15) Standard form of a quadratic equation ax² + bx + c = 0 (a ≠ 0). Solution of quadratic equations (only real roots) by factorization and by completing the square, i.e., by using the quadratic formula. Relationship between discriminant and nature of roots. Problems related to day-to-day activities to be incorporated. 4. Arithmetic Progressions (AP) (Periods 8) Motivation for studying AP. Derivation of the standard results for finding the nth term and the sum of the first n terms.

Unit III: Trigonometry. 1. Introduction to Trigonometry (Periods 18) Trigonometric ratios of an acute angle of a right-angled triangle. Proof of their existence (well defined); motivate the ratios, whichever are defined at 0° and 90°. Values (with proofs) of the trigonometric ratios of 30°, 45° and 60°. Relationships between the ratios. Trigonometric Identities: proof and applications of the identity sin²A + cos²A = 1. Only simple identities to be given. Trigonometric ratios of complementary angles. 2. Heights and Distances (Periods 8) Simple and believable problems on heights and distances. Problems should not involve more than two right triangles. Angles of elevation/depression should be only 30°, 45°, 60°.

Unit IV: Coordinate Geometry. Lines (in two dimensions) (Periods 15) Review the concepts of coordinate geometry done earlier, including graphs of linear equations. Awareness of the geometrical representation of quadratic polynomials. Distance between two points and section formula (internal). Area of a triangle.

Unit V: Geometry. 1. Triangles (Periods 15) Definitions, examples, counterexamples of similar triangles. 1. (Prove) If a line is drawn parallel to one side of a triangle to intersect the other two sides in distinct points, the other two sides are divided in the same ratio. 2. (Motivate) If a line divides two sides of a triangle in the same ratio, the line is parallel to the third side. 3. (Motivate) If in two triangles the corresponding angles are equal, their corresponding sides are proportional and the triangles are similar. 4. (Motivate) If the corresponding sides of two triangles are proportional, their corresponding angles are equal and the two triangles are similar. 5.
(Motivate) If one angle of a triangle is equal to one angle of another triangle and the sides including these angles are proportional, the two triangles are similar. 6. (Motivate) If a perpendicular is drawn from the vertex of the right angle to the hypotenuse, the triangles on each side of the perpendicular are similar to the whole triangle and to each other. 7. (Prove) The ratio of the areas of two similar triangles is equal to the ratio of the squares on their corresponding sides. 8. (Prove) In a right triangle, the square on the hypotenuse is equal to the sum of the squares on the other two sides. 9. (Prove) In a triangle, if the square on one side is equal to the sum of the squares on the other two sides, the angle opposite the first side is a right angle. 2. Circles (Periods 8) Tangents to a circle, motivated by chords drawn from points coming closer and closer to the point. 1. (Prove) The tangent at any point of a circle is perpendicular to the radius through the point of contact. 2. (Prove) The lengths of tangents drawn from an external point to a circle are equal. 3. Constructions (Periods 8) 1. Division of a line segment in a given ratio (internally). 2. Tangent to a circle from a point outside it. 3. Construction of a triangle similar to a given triangle. Unit VI: Mensuration. 1. Areas Related to Circles (Periods 12) Motivate the area of a circle; area of sectors and segments of a circle. Problems based on areas and perimeter/circumference of the above-said plane figures. (In calculating the area of a segment of a circle, problems should be restricted to central angles of 60°, 90° and 120° only. Plane figures involving triangles, simple quadrilaterals and circles should be taken.) 2. Surface Areas and Volumes (Periods 12) 1. Problems on finding surface areas and volumes of combinations of any two of the following: cubes, cuboids, spheres, hemispheres and right circular cylinders/cones. Frustum of a cone. 2. Problems involving converting one type of metallic solid into another and other mixed problems. (Problems with a combination of not more than two different solids to be taken.) Unit VII: Statistics and Probability. 1. Statistics (Periods 15) Mean, median and mode of grouped data (bimodal situations to be avoided). Cumulative frequency graph. 2. Probability (Periods 10) Classical definition of probability. Connection with probability as given in Class IX. Simple problems on single events, not using set notation.
Solve quadratic equations in one variable. Use the method of completing the square to transform any quadratic equation in x into an equation of the form (x − p)² = q that has the same solutions, and derive the quadratic formula from this form. We have already worked on linear equations, which have a constant rate of change, also known as the slope. Quadratic equations have a variable rate of change – a variable slope. An example of a quadratic equation is x² − 5x + 6 = 0. Graphs of linear equations are straight lines; graphs of quadratic equations are parabolas. Quadratic equations also describe many real-world situations.
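As a rough illustration of the completing-the-square derivation described above (added here for clarity, with an illustrative function name), the following Python sketch solves a quadratic equation via the resulting quadratic formula, returning only real roots.

```python
import math

def solve_quadratic(a, b, c):
    """Solve a*x**2 + b*x + c = 0 by completing the square.

    Rearranging gives (x + b/(2a))**2 = (b**2 - 4ac)/(4a**2),
    which is the (x - p)**2 = q form and leads to the quadratic formula.
    Only real roots are returned (None if the discriminant is negative).
    """
    if a == 0:
        raise ValueError("not a quadratic equation (a must be nonzero)")
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return None  # no real roots
    root = math.sqrt(discriminant)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(solve_quadratic(1, -5, 6))  # (3.0, 2.0)
```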
In mathematics, the cross product or vector product (occasionally directed area product, to emphasize the geometric significance) is a binary operation on two vectors in three-dimensional space and is denoted by the symbol ×. Given two linearly independent vectors a and b, the cross product, a × b (read "a cross b"), is a vector that is perpendicular to both a and b and thus normal to the plane containing them. It has many applications in mathematics, physics, engineering, and computer programming. It should not be confused with the dot product (projection product). If two vectors have the same direction (or have the exact opposite direction from one another, i.e. are not linearly independent) or if either one has zero length, then their cross product is zero. More generally, the magnitude of the product equals the area of a parallelogram with the vectors for sides; in particular, the magnitude of the product of two perpendicular vectors is the product of their lengths. The cross product is anticommutative (i.e., a × b = −(b × a)) and is distributive over addition (i.e., a × (b + c) = a × b + a × c). The space R3 together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Like the dot product, it depends on the metric of Euclidean space, but unlike the dot product, it also depends on a choice of orientation or "handedness". The product can be generalized in various ways; it can be made independent of orientation by changing the result to a pseudovector, or in arbitrary dimensions the exterior product of vectors can be used, with a bivector or two-form result. Also, using the orientation and metric structure just as for the traditional 3-dimensional cross product, one can in n dimensions take the product of n − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions. (See § Generalizations, below, for other dimensions.)
- 1 Definition
- 2 Names
- 3 Computing the cross product
- 4 Properties
- 5 Alternative ways to compute the cross product
- 6 Applications
- 7 Cross product as an external product
- 8 Cross product and handedness
- 9 Generalizations
- 10 History
- 11 See also
- 12 Notes
- 13 References
- 14 External links
The cross product of two vectors a and b is defined only in three-dimensional space and is denoted by a × b. In physics, sometimes the notation a ∧ b is used, though this is avoided in mathematics to avoid confusion with the exterior product. The cross product a × b is defined as a vector c that is perpendicular (orthogonal) to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span: a × b = ‖a‖ ‖b‖ sin(θ) n, where θ is the angle between a and b in the plane containing them (hence, it is between 0° and 180°), ‖a‖ and ‖b‖ are the magnitudes of vectors a and b, and n is a unit vector perpendicular to the plane containing a and b in the direction given by the right-hand rule. If the vectors a and b are parallel (i.e., the angle θ between them is either 0° or 180°), by the above formula, the cross product of a and b is the zero vector 0. By convention, the direction of the vector n is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction of a and the middle finger in the direction of b. The vector n then comes out of the thumb.
Using this rule implies that the cross product is anti-commutative, i.e., b × a = −(a × b). By pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb will be forced in the opposite direction, reversing the sign of the product vector. Using the cross product requires the handedness of the coordinate system to be taken into account (as explicit in the definition above). If a left-handed coordinate system is used, the direction of the vector n is given by the left-hand rule and points in the opposite direction. This, however, creates a problem because transforming from one arbitrary reference system to another (e.g., a mirror image transformation from a right-handed to a left-handed coordinate system) should not change the direction of n. The problem is clarified by realizing that the cross product of two vectors is not a (true) vector, but rather a pseudovector. See cross product and handedness for more detail. In 1877, to emphasize the fact that the result of a dot product is a scalar while the result of a cross product is a vector, William Kingdon Clifford coined the alternative names scalar product and vector product for the two operations. These alternative names are still widely used in the literature. Both the cross notation (a × b) and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product a ⋅ b involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of a determinant of a special 3 × 3 matrix. According to Sarrus's rule, this involves multiplications between matrix elements identified by crossed diagonals.

Computing the cross product. The standard basis vectors i, j, and k satisfy the following equalities in a right-hand coordinate system: i × j = k, j × k = i, k × i = j, which imply, by the anticommutativity of the cross product, that j × i = −k, k × j = −i, i × k = −j. The anticommutativity of the cross product (and the obvious lack of linear independence) also implies that i × i = j × j = k × k = 0 (the zero vector). These equalities, together with the distributivity and linearity of the cross product (but these do not follow easily from the definition given above), are sufficient to determine the cross product of any two vectors a and b. Each vector can be defined as the sum of three orthogonal components parallel to the standard basis vectors: a = a1 i + a2 j + a3 k and b = b1 i + b2 j + b3 k. Their cross product a × b can be expanded using distributivity: a × b = a1b1 (i × i) + a1b2 (i × j) + a1b3 (i × k) + a2b1 (j × i) + a2b2 (j × j) + a2b3 (j × k) + a3b1 (k × i) + a3b2 (k × j) + a3b3 (k × k). This can be interpreted as the decomposition of a × b into the sum of nine simpler cross products involving vectors aligned with i, j, or k. Each one of these nine cross products operates on two vectors that are easy to handle, as they are either parallel or orthogonal to each other. From this decomposition, by using the above-mentioned equalities and collecting similar terms, we obtain a × b = (a2b3 − a3b2) i + (a3b1 − a1b3) j + (a1b2 − a2b1) k, meaning that the three scalar components of the resulting vector s = s1i + s2j + s3k = a × b are s1 = a2b3 − a3b2, s2 = a3b1 − a1b3, s3 = a1b2 − a2b1. Using column vectors, the same result can be written as (s1, s2, s3) = (a2b3 − a3b2, a3b1 − a1b3, a1b2 − a2b1), which gives the components of the resulting vector directly. (When the determinant form mentioned above is used to compute a scalar triple product, the result may be negative; the volume of the corresponding parallelepiped is then given by its absolute value.) Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of perpendicularity in the same way that the dot product is a measure of parallelism.
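As a quick illustration of the component formulas just derived (a sketch added here, not taken from the article; the function name cross() is illustrative), the following Python code computes a × b directly from the components:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples.

    Components follow the expansion over the basis vectors i, j, k:
        s1 = a2*b3 - a3*b2,  s2 = a3*b1 - a1*b3,  s3 = a1*b2 - a2*b1.
    """
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a2 * b3 - a3 * b2,
            a3 * b1 - a1 * b3,
            a1 * b2 - a2 * b1)

# i x j = k, and parallel vectors give the zero vector:
print(cross((1, 0, 0), (0, 1, 0)))   # (0, 0, 1)
print(cross((2, 4, 6), (1, 2, 3)))   # (0, 0, 0)
```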
Given two unit vectors, their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel. The dot product of two unit vectors behaves just oppositely: it is zero when the unit vectors are perpendicular and 1 if the unit vectors are parallel. Unit vectors enable two convenient identities: the dot product of two unit vectors yields the cosine (which may be positive or negative) of the angle between the two unit vectors. The magnitude of the cross product of the two unit vectors yields the sine (which will always be positive). If the cross product of two vectors is the zero vector (i.e. a × b = 0), then either one or both of the inputs is the zero vector, (a = 0 or b = 0) or else they are parallel or antiparallel (a ∥ b) so that the sine of the angle between them is zero (θ = 0° or θ = 180° and sinθ = 0). The self cross product of a vector is the zero vector: The cross product is anticommutative, distributive over addition, and compatible with scalar multiplication so that Distributivity, linearity and Jacobi identity show that the R3 vector space together with vector addition and the cross product forms a Lie algebra, the Lie algebra of the real orthogonal group in 3 dimensions, SO(3). The cross product does not obey the cancellation law: that is, a × b = a × c with a ≠ 0 does not imply b = c, but only that: This can be the case where b and c cancel, but additionally where a and b − c are parallel; that is, they are related by a scale factor t, leading to: for some scalar t. If, in addition to a × b = a × c and a ≠ 0 as above, it is the case that a ⋅ b = a ⋅ c then As b − c cannot be simultaneously parallel (for the cross product to be 0) and perpendicular (for the dot product to be 0) to a, it must be the case that b and c cancel: b = c. From the geometrical definition, the cross product is invariant under proper rotations about the axis defined by a × b. In formulae: - , where is a rotation matrix with . More generally, the cross product obeys the following identity under matrix transformations: The cross product of two vectors lies in the null space of the 2 × 3 matrix with the vectors as rows: For the sum of two cross products, the following identity holds: The product rule of differential calculus applies to any bilinear operation, and therefore also to the cross product: where a and b are vectors that depend on the real variable t. Triple product expansion The cross product is used in both forms of the triple product. The scalar triple product of three vectors is defined as It is the signed volume of the parallelepiped with edges a, b and c and as such the vectors can be used in any order that's an even permutation of the above ordering. The following therefore are equal: The vector triple product is the cross product of a vector with the result of another cross product, and is related to the dot product by the following formula The mnemonic "BAC minus CAB" is used to remember the order of the vectors in the right hand member. This formula is used in physics to simplify vector calculations. A special case, regarding gradients and useful in vector calculus, is where ∇2 is the vector Laplacian operator. Other identities relate the cross product to the scalar triple product: where I is the identity matrix. The cross product and the dot product are related by: The right-hand side is the Gram determinant of a and b, the square of the area of the parallelogram defined by the vectors. 
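Several of the identities discussed above (anticommutativity, the "BAC minus CAB" triple-product expansion, and the relation between the cross and dot products) can be spot-checked numerically. The following sketch is illustrative only: it reuses the cross() helper from the previous example, and dot(), scale(), sub() and close() are small hypothetical helpers defined here.

```python
# Numerical spot-checks of identities discussed above (illustrative only).
# cross() is the helper from the previous sketch.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def scale(t, v):
    return tuple(t * x for x in v)

def sub(u, v):
    return tuple(x - y for x, y in zip(u, v))

def close(u, v, tol=1e-9):
    return all(abs(x - y) <= tol for x, y in zip(u, v))

a, b, c = (1.0, 2.0, 3.0), (-2.0, 0.5, 4.0), (0.0, 1.0, -1.0)

# Anticommutativity: a x b = -(b x a)
assert close(cross(a, b), scale(-1.0, cross(b, a)))

# Vector triple product ("BAC minus CAB"): a x (b x c) = b(a.c) - c(a.b)
assert close(cross(a, cross(b, c)),
             sub(scale(dot(a, c), b), scale(dot(a, b), c)))

# Relation to the dot product (Lagrange): |a x b|^2 = |a|^2 |b|^2 - (a.b)^2
ab = cross(a, b)
assert abs(dot(ab, ab) - (dot(a, a) * dot(b, b) - dot(a, b) ** 2)) <= 1e-9
```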
This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in terms of the angle θ between the two vectors, as: the above given relationship can be rewritten as follows: Invoking the Pythagorean trigonometric identity one obtains: which is the magnitude of the cross product expressed in terms of θ, equal to the area of the parallelogram defined by a and b (see definition above). The combination of this requirement and the property that the cross product be orthogonal to its constituents a and b provides an alternative definition of the cross product. where a and b may be n-dimensional vectors. This also shows that the Riemannian volume form for surfaces is exactly the surface element from vector calculus. In the case where n = 3, combining these two equations results in the expression for the magnitude of the cross product in terms of its components: The same result is found directly using the components of the cross product found from: In R3, Lagrange's equation is a special case of the multiplicativity |vw| = |v||w| of the norm in the quaternion algebra. If a = c and b = d this simplifies to the formula above. Infinitesimal generators of rotations The cross product conveniently describes the infinitesimal generators of rotations in R3. Specifically, if n is a unit vector in R3 and R(φ, n) denotes a rotation about the axis through the origin specified by n, with angle φ (measured in radians, counterclockwise when viewed from the tip of n), then for every vector x in R3. The cross product with n therefore describes the infinitesimal generator of the rotations about n. These infinitesimal generators form the Lie algebra so(3) of the rotation group SO(3), and we obtain the result that the Lie algebra R3 with cross product is isomorphic to the Lie algebra so(3). Alternative ways to compute the cross product Conversion to matrix multiplication where superscript T refers to the transpose operation, and [a]× is defined by: The columns [a]×,i of the skew-symmetric matrix for a vector a can be also obtained by calculating the cross product with unit vectors, i.e.: where is the outer product operator. Also, if a is itself expressed as a cross product: Proof by substitution Evaluation of the cross product gives Hence, the left hand side equals Now, for the right hand side, And its transpose is Evaluation of the right hand side gives Comparison shows that the left hand side equals the right hand side. This result can be generalized to higher dimensions using geometric algebra. In particular in any dimension bivectors can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and vector is equivalent to the grade-1 part of the product of a bivector and vector. In three dimensions bivectors are dual to vectors so the product is equivalent to the cross product, with the bivector instead of its vector dual. In higher dimensions the product can still be calculated but bivectors have more degrees of freedom and are not equivalent to vectors. This notation is also often much easier to work with, for example, in epipolar geometry. From the general properties of the cross product follows immediately that and from fact that [a]× is skew-symmetric it follows that The above-mentioned triple product expansion (bac–cab rule) can be easily proven using this notation. As mentioned above, the Lie algebra R3 with cross product is isomorphic to the Lie algebra so(3), whose elements can be identified with the 3×3 skew-symmetric matrices. 
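A minimal sketch of that matrix correspondence follows (Python with NumPy; the helper name skew is an illustrative choice). It builds [a]× from the components of a and checks that multiplying by it reproduces the cross product, and that the matrix is indeed skew-symmetric.

```python
import numpy as np

def skew(a):
    """Return the skew-symmetric matrix [a]_x with [a]_x @ b == np.cross(a, b)."""
    a1, a2, a3 = a
    return np.array([[0.0, -a3,  a2],
                     [ a3, 0.0, -a1],
                     [-a2,  a1, 0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

A = skew(a)
print(np.allclose(A @ b, np.cross(a, b)))   # True: matrix product equals cross product
print(np.allclose(A.T, -A))                 # True: [a]_x is skew-symmetric
print(np.allclose(b @ A, np.cross(b, a)))   # True: multiplying from the left flips the sign
```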
The map a → [a]× provides an isomorphism between R3 and so(3). Under this map, the cross product of 3-vectors corresponds to the commutator of 3x3 skew-symmetric matrices. Matrix conversion for cross product with canonical base vectors Denoting with the -th canonical base vector, the cross product of a generic vector with is given by: , where These matrices share the following properties: - Both trace and determinant are zero; - (see below); The orthogonal projection matrix of a vector is given by . The projection matrix onto the orthogonal complement is given by , where is the identity matrix. For the special case of , it can be verified that For other properties of orthogonal projection matrices, see projection (linear algebra). Index notation for tensors The cross product can alternatively be defined in terms of the Levi-Civita symbol εijk and a dot product ηmi (= δmi for an orthonormal basis), which are useful in converting vector notation for tensor applications: in which repeated indices are summed over the values 1 to 3. This representation is another form of the skew-symmetric representation of the cross product: In classical mechanics: representing the cross product by using the Levi-Civita symbol can cause mechanical symmetries to be obvious when physical systems are isotropic. (An example: consider a particle in a Hooke's Law potential in three-space, free to oscillate in three dimensions; none of these dimensions are "special" in any sense, so symmetries lie in the cross-product-represented angular momentum, which are made clear by the abovementioned Levi-Civita representation). The word "xyzzy" can be used to remember the definition of the cross product. The second and third equations can be obtained from the first by simply vertically rotating the subscripts, x → y → z → x. The problem, of course, is how to remember the first equation, and two options are available for this purpose: either to remember the relevant two diagonals of Sarrus's scheme (those containing i), or to remember the xyzzy sequence. Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation. This may be helpful for remembering the correct cross product formula. If we want to obtain the formula for we simply drop the and from the formula, and take the next two components down: When doing this for the next two elements down should "wrap around" the matrix so that after the z component comes the x component. For clarity, when performing this operation for , the next two components should be z and x (in that order). While for the next two components should be taken as x and y. For then, if we visualize the cross operator as pointing from an element on the left to an element on the right, we can take the first element on the left and simply multiply by the element that the cross points to in the right hand matrix. We then subtract the next element down on the left, multiplied by the element that the cross points to here as well. This results in our formula – We can do this in the same way for and to construct their associated formulas. The cross product has applications in various contexts: e.g. it is used in computational geometry, physics and engineering. A non-exhaustive list of examples follows. The cross product appears in the calculation of the distance of two skew lines (lines not in the same plane) from each other in three-dimensional space. 
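As a concrete illustration of that first example, the sketch below (Python with NumPy; the function name and the point-plus-direction form of the lines are illustrative choices) computes the distance between two skew lines. It relies on the fact that d1 × d2 is perpendicular to both direction vectors, so projecting the vector joining the two lines onto it gives the gap between them.

```python
import numpy as np

def skew_line_distance(p1, d1, p2, d2):
    """Distance between the lines p1 + t*d1 and p2 + s*d2 in 3-space.

    d1 x d2 is perpendicular to both lines; the distance is the length of the
    projection of (p2 - p1) onto that common perpendicular direction.
    """
    n = np.cross(d1, d2)
    n_norm = np.linalg.norm(n)
    if np.isclose(n_norm, 0.0):
        raise ValueError("lines are parallel; use a point-to-line distance instead")
    return abs(np.dot(p2 - p1, n)) / n_norm

# Example: the x-axis and a line parallel to the y-axis through (0, 0, 5).
p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
p2, d2 = np.array([0.0, 0.0, 5.0]), np.array([0.0, 1.0, 0.0])
print(skew_line_distance(p1, d1, p2, d2))   # 5.0
```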
The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in computer graphics. For example, the winding of a polygon (clockwise or anticlockwise) about a point within the polygon can be calculated by triangulating the polygon (like spoking a wheel) and summing the angles (between the spokes) using the cross product to keep track of the sign of each angle. In computational geometry of the plane, the cross product is used to determine the sign of the acute angle defined by three points and . It corresponds to the direction (upward or downward) of the cross product of the two coplanar vectors defined by the two pairs of points and . The sign of the acute angle is the sign of the expression which is the signed length of the cross product of the two vectors. In the "right-handed" coordinate system, if the result is 0, the points are collinear; if it is positive, the three points constitute a positive angle of rotation around from to , otherwise a negative angle. From another point of view, the sign of tells whether lies to the left or to the right of line Angular momentum and torque The angular momentum of a particle about a given origin is defined as: where is the position vector of the particle relative to the origin, is the linear momentum of the particle. In the same way, the moment of a force applied at point B around point A is given as: In mechanics the moment of a force is also called torque and written as Since position , linear momentum and force are all true vectors, both the angular momentum and the moment of a force are pseudovectors or axial vectors. The cross product frequently appears in the description of rigid motions. Two points P and Q on a rigid body can be related by: where is the point's position, is its velocity and is the body's angular velocity. Since position and velocity are true vectors, the angular velocity is a pseudovector or axial vector. The cross product is used to describe the Lorentz force experienced by a moving electric charge : Since velocity , force and electric field are all true vectors, the magnetic field is a pseudovector. The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in epipolar and multi-view geometry, in particular when deriving matching constraints. Cross product as an external product The cross product can be defined in terms of the exterior product. In this context,[which?] it is an external product. This view[which?] allows for a natural geometric interpretation of the cross product. In exterior algebra the exterior product of two vectors is a bivector. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectors a and b, one can view the bivector a ∧ b as the oriented parallelogram spanned by a and b. The cross product is then obtained by taking the Hodge star of the bivector a ∧ b, mapping 2-vectors to vectors: This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. Only in three dimensions is the result an oriented line element – a vector – whereas, for example, in 4 dimensions the Hodge dual of a bivector is two-dimensional – another oriented plane element. 
So, only in three dimensions is the cross product of a and b the vector dual to the bivector a ∧ b: it is perpendicular to the bivector, with orientation dependent on the coordinate system's handedness, and has the same magnitude relative to the unit normal vector as a ∧ b has relative to the unit bivector; precisely the properties described above. Cross product and handedness When measurable quantities involve cross products, the handedness of the coordinate systems used cannot be arbitrary. However, when physics laws are written as equations, it should be possible to make an arbitrary choice of the coordinate system (including handedness). To avoid problems, one should be careful to never write down an equation where the two sides do not behave equally under all transformations that need to be considered. For example, if one side of the equation is a cross product of two vectors, one must take into account that when the handedness of the coordinate system is not fixed a priori, the result is not a (true) vector but a pseudovector. Therefore, for consistency, the other side must also be a pseudovector. More generally, the result of a cross product may be either a vector or a pseudovector, depending on the type of its operands (vectors or pseudovectors). Namely, vectors and pseudovectors are interrelated in the following ways under application of the cross product: - vector × vector = pseudovector - pseudovector × pseudovector = pseudovector - vector × pseudovector = vector - pseudovector × vector = vector. So by the above relationships, the unit basis vectors i, j and k of an orthonormal, right-handed (Cartesian) coordinate frame must all be pseudovectors (if a basis of mixed vector types is disallowed, as it normally is) since i × j = k, j × k = i and k × i = j. Because the cross product may also be a (true) vector, it may not change direction with a mirror image transformation. This happens, according to the above relationships, if one of the operands is a (true) vector and the other one is a pseudovector (e.g., the cross product of two vectors). For instance, a vector triple product involving three (true) vectors is a (true) vector. A handedness-free approach is possible using exterior algebra. There are several ways to generalize the cross product to the higher dimensions. The cross product can be seen as one of the simplest Lie products, and is thus generalized by Lie algebras, which are axiomatized as binary products satisfying the axioms of multilinearity, skew-symmetry, and the Jacobi identity. Many Lie algebras exist, and their study is a major field of mathematics, called Lie theory. For example, the Heisenberg algebra gives another Lie algebra structure on In the basis the product is The cross product can also be described in terms of quaternions, and this is why the letters i, j, k are a convention for the standard basis on R3. The unit vectors i, j, k correspond to "binary" (180 deg) rotations about their respective axes (Altmann, S. L., 1986, Ch. 12), said rotations being represented by "pure" quaternions (zero real part) with unit norms. For instance, the above given cross product relations among i, j, and k agree with the multiplicative relations among the quaternions i, j, and k. In general, if a vector [a1, a2, a3] is represented as the quaternion a1i + a2j + a3k, the cross product of two vectors can be obtained by taking their product as quaternions and deleting the real part of the result. The real part will be the negative of the dot product of the two vectors. 
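That quaternion recipe is easy to check numerically. The sketch below (Python with NumPy; the Hamilton-product helper is written out by hand rather than taken from any library) multiplies the two "pure" quaternions (0, a) and (0, b) and confirms that the vector part of the product is a × b while the real part is −(a ⋅ b).

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Represent a and b as pure quaternions (zero real part) and multiply them.
prod = quat_mul(np.concatenate(([0.0], a)), np.concatenate(([0.0], b)))

print(np.allclose(prod[1:], np.cross(a, b)))   # True: vector part = a x b
print(np.isclose(prod[0], -np.dot(a, b)))      # True: real part = -(a . b)
```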
Alternatively, using the above identification of the 'purely imaginary' quaternions with R3, the cross product may be thought of as half of the commutator of two quaternions. A cross product for 7-dimensional vectors can be obtained in the same way by using the octonions instead of the quaternions. The nonexistence of nontrivial vector-valued cross products of two vectors in other dimensions is related to the result from Hurwitz's theorem that the only normed division algebras are the ones with dimension 1, 2, 4, and 8. In general dimension, there is no direct analogue of the binary cross product that yields specifically a vector. There is however the exterior product, which has similar properties, except that the exterior product of two vectors is now a 2-vector instead of an ordinary vector. As mentioned above, the cross product can be interpreted as the exterior product in three dimensions by using the Hodge star operator to map 2-vectors to vectors. The Hodge dual of the exterior product yields an (n − 2)-vector, which is a natural generalization of the cross product in any number of dimensions. As mentioned above, the cross product can be interpreted in three dimensions as the Hodge dual of the exterior product. In any finite n dimensions, the Hodge dual of the exterior product of n − 1 vectors is a vector. So, instead of a binary operation, in arbitrary finite dimensions, the cross product is generalized as the Hodge dual of the exterior product of some given n − 1 vectors. This generalization is called external product. Interpreting the three-dimensional vector space of the algebra as the 2-vector (not the 1-vector) subalgebra of the three-dimensional geometric algebra, where , , and , the cross product corresponds exactly to the commutator product in geometric algebra and both use the same symbol . The commutator product is defined for 2-vectors and in geometric algebra as: The commutator product could be generalised to arbitrary multivectors in three dimensions, which results in a multivector consisting of only elements of grades 1 (1-vectors/true vectors) and 2 (2-vectors/pseudovectors). While the commutator product of two 1-vectors is indeed the same as the exterior product and yields a 2-vector, the commutator of a 1-vector and a 2-vector yields a true vector, corresponding instead to the left and right contractions in geometric algebra. The commutator product of two 2-vectors has no corresponding equivalent product, which is why the commutator product is defined in the first place for 2-vectors. Furthermore, the commutator triple product of three 2-vectors is the same as the vector triple product of the same three pseudovectors in vector algebra. However, the commutator triple product of three 1-vectors in geometric algebra is instead the negative of the vector triple product of the same three true vectors in vector algebra. Generalizations to higher dimensions is provided by the same commutator product of 2-vectors in higher-dimensional geometric algebras, but the 2-vectors are no longer pseudovectors. Just as the commutator product/cross product of 2-vectors in three dimensions correspond to the simplest Lie algebra, the 2-vector subalgebras of higher dimensional geometric algebra equipped with the commutator product also correspond to the Lie algebras. Also as in three dimensions, the commutator product could be further generalised to arbitrary multivectors. 
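The Hodge-dual generalization just described, which takes n − 1 vectors in Rn to a single vector, can be sketched as follows (Python with NumPy; the function name and the cofactor-expansion route are implementation choices, and the sign convention assumes the row of basis vectors is placed last, as in the coordinate formula discussed in the next paragraph). For n = 3 it reduces to the ordinary binary cross product, and in general the result is perpendicular to every input vector.

```python
import numpy as np

def generalized_cross(*vectors):
    """(n-1)-ary cross product of n-1 vectors in R^n.

    Expands the formal determinant whose first n-1 rows are the input vectors
    and whose last row is the standard basis e_1, ..., e_n, by cofactors along
    that last row.
    """
    V = np.array(vectors, dtype=float)        # shape (n-1, n)
    n = V.shape[1]
    assert V.shape[0] == n - 1, "need n-1 vectors of dimension n"
    result = np.empty(n)
    for i in range(n):
        minor = np.delete(V, i, axis=1)       # drop column i
        result[i] = (-1) ** (n + 1 + i) * np.linalg.det(minor)
    return result

# n = 3: agrees with the usual cross product.
a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
print(np.allclose(generalized_cross(a, b), np.cross(a, b)))   # True

# n = 4: e1, e2, e3 map to e4, and the result is perpendicular to all inputs,
# so (e1, e2, e3, result) is positively oriented.
u, v, w = np.eye(4)[:3]
x = generalized_cross(u, v, w)
print(x, [float(np.dot(x, y)) for y in (u, v, w)])   # [0. 0. 0. 1.] [0.0, 0.0, 0.0]
```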
In the context of multilinear algebra, the cross product can be seen as the (1,2)-tensor (a mixed tensor, specifically a bilinear map) obtained from the 3-dimensional volume form,[note 2] a (0,3)-tensor, by raising an index. In detail, the 3-dimensional volume form defines a product by taking the determinant of the matrix given by these 3 vectors. By duality, this is equivalent to a function (fixing any two inputs gives a function by evaluating on the third input) and in the presence of an inner product (such as the dot product; more generally, a non-degenerate bilinear form), we have an isomorphism and thus this yields a map which is the cross product: a (0,3)-tensor (3 vector inputs, scalar output) has been transformed into a (1,2)-tensor (2 vector inputs, 1 vector output) by "raising an index". Translating the above algebra into geometry, the function "volume of the parallelepiped defined by " (where the first two vectors are fixed and the last is an input), which defines a function , can be represented uniquely as the dot product with a vector: this vector is the cross product From this perspective, the cross product is defined by the scalar triple product, In the same way, in higher dimensions one may define generalized cross products by raising indices of the n-dimensional volume form, which is a -tensor. The most direct generalizations of the cross product are to define either: - a -tensor, which takes as input vectors, and gives as output 1 vector – an -ary vector-valued product, or - a -tensor, which takes as input 2 vectors and gives as output skew-symmetric tensor of rank n − 2 – a binary product with rank n − 2 tensor values. One can also define -tensors for other k. These products are all multilinear and skew-symmetric, and can be defined in terms of the determinant and parity. The -ary product can be described as follows: given vectors in define their generalized cross product as: - perpendicular to the hyperplane defined by the - magnitude is the volume of the parallelotope defined by the which can be computed as the Gram determinant of the - oriented so that is positively oriented. This is the unique multilinear, alternating product which evaluates to , and so forth for cyclic permutations of indices. In coordinates, one can give a formula for this -ary analogue of the cross product in Rn by: This formula is identical in structure to the determinant formula for the normal cross product in R3 except that the row of basis vectors is the last row in the determinant rather than the first. The reason for this is to ensure that the ordered vectors (v1, ...,vn−1, Λ(v1, ...,vn−1)) have a positive orientation with respect to (e1, ..., en). If n is odd, this modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product. In the case that n is even, however, the distinction must be kept. This -ary form enjoys many of the same properties as the vector cross product: it is alternating and linear in its arguments, it is perpendicular to each argument, and its magnitude gives the hypervolume of the region bounded by the arguments. And just like the vector cross product, it can be defined in a coordinate independent way as the Hodge dual of the wedge product of the arguments. If the cross product is defined as a binary operation, it takes as input exactly two vectors. If its output is not required to be a vector or a pseudovector but instead a matrix, then it can be generalized in an arbitrary number of dimensions. 
In mechanics, for example, the angular velocity can be interpreted either as a pseudovector ω or as an anti-symmetric matrix or skew-symmetric tensor Ω. In the latter case, the velocity law for a rigid body takes the form v = Ω r, where Ω is formally defined from the rotation matrix R associated with the body's frame as Ω = (dR/dt)Rᵀ. In three dimensions, Ω r = ω × r holds, so the matrix acts on a position vector exactly as the cross product with the angular velocity pseudovector does. In quantum mechanics the angular momentum is often represented as an anti-symmetric matrix or tensor operator. More precisely, it is the result of a cross product involving the position x and the linear momentum p. Since both x and p can have an arbitrary number of components, that kind of cross product can be extended to any dimension, retaining the "physical" interpretation of the operation. See § Alternative ways to compute the cross product for numerical details. In 1773, Joseph-Louis Lagrange introduced the component form of both the dot and cross products in order to study the tetrahedron in three dimensions. In 1843, William Rowan Hamilton introduced the quaternion product, and with it the terms "vector" and "scalar". Given two quaternions [0, u] and [0, v], where u and v are vectors in R3, their quaternion product can be summarized as [−u ⋅ v, u × v]. James Clerk Maxwell used Hamilton's quaternion tools to develop his famous electromagnetism equations, and for this and other reasons quaternions for a time were an essential part of physics education. In 1878 William Kingdon Clifford published his Elements of Dynamic, which was an advanced text for its time. He defined the product of two vectors to have magnitude equal to the area of the parallelogram of which they are two sides, and direction perpendicular to their plane. Oliver Heaviside and Josiah Willard Gibbs also felt that quaternion methods were too cumbersome, often requiring the scalar or vector part of a result to be extracted. Thus, about forty years after the quaternion product, the dot product and cross product were introduced, to heated opposition. Pivotal to (eventual) acceptance was the efficiency of the new approach, allowing Heaviside to reduce the equations of electromagnetism from Maxwell's original 20 to the four commonly seen today. Largely independent of this development, and largely unappreciated at the time, Hermann Grassmann created a geometric algebra not tied to dimension two or three, with the exterior product playing a central role. In 1853 Augustin-Louis Cauchy, a contemporary of Grassmann, published a paper on algebraic keys which were used to solve equations and had the same multiplication properties as the cross product. Clifford combined the algebras of Hamilton and Grassmann to produce Clifford algebra, where in the case of three-dimensional vectors the bivector produced from two vectors dualizes to a vector, thus reproducing the cross product. The cross notation and the name "cross product" began with Gibbs. Originally they appeared in privately published notes for his students in 1881 as Elements of Vector Analysis. The utility for mechanics was noted by Aleksandr Kotelnikov. Gibbs's notation and the name "cross product" later reached a wide audience through Vector Analysis, a textbook by Edwin Bidwell Wilson, a former student. Wilson rearranged material from Gibbs's lectures, together with material from publications by Heaviside, Föppl, and Hamilton. He divided vector analysis into three parts: First, that which concerns addition and the scalar and vector products of vectors. Second, that which concerns the differential and integral calculus in its relations to scalar and vector functions. 
Third, that which contains the theory of the linear vector function. Two main kinds of vector multiplications were defined, and they were called as follows: - The direct, scalar, or dot product of two vectors - The skew, vector, or cross product of two vectors Several kinds of triple products and products of more than three vectors were also examined. The above-mentioned triple product expansion was also included. - Cartesian product – A product of two sets - Dot product - Exterior algebra - Geometric algebra: Rotating systems - Multiple cross products – Products involving more than three vectors - × (the symbol) - Here, "formal" means that this notation has the form of a determinant, but does not strictly adhere to the definition; it is a mnemonic used to remember the expansion of the cross product. - By a volume form one means a function that takes in n vectors and gives out a scalar, the volume of the parallelotope defined by the vectors: This is an n-ary multilinear skew-symmetric form. In the presence of a basis, such as on this is given by the determinant, but in an abstract vector space, this is added structure. In terms of G-structures, a volume form is an -structure. - WS Massey (1983). "Cross products of vectors in higher dimensional Euclidean spaces". The American Mathematical Monthly. 90 (10): 697–701. doi:10.2307/2323537. JSTOR 2323537. If one requires only three basic properties of the cross product ... it turns out that a cross product of vectors exists only in 3-dimensional and 7-dimensional Euclidean space. - Jeffreys, H; Jeffreys, BS (1999). Methods of mathematical physics. Cambridge University Press. OCLC 41158050. - Wilson 1901, p. 60–61 - Dennis G. Zill; Michael R. Cullen (2006). "Definition 7.4: Cross product of two vectors". Advanced engineering mathematics (3rd ed.). Jones & Bartlett Learning. p. 324. ISBN 0-7637-4591-X. - A History of Vector Analysis by Michael J. Crowe, Math. UC Davis - Dennis G. Zill; Michael R. Cullen (2006). "Equation 7: a × b as sum of determinants". cited work. Jones & Bartlett Learning. p. 321. ISBN 0-7637-4591-X. - M. R. Spiegel; S. Lipschutz; D. Spellman (2009). Vector Analysis. Schaum's outlines. McGraw Hill. p. 29. ISBN 978-0-07-161545-7. - WS Massey (Dec 1983). "Cross products of vectors in higher dimensional Euclidean spaces". The American Mathematical Monthly. The American Mathematical Monthly, Vol. 90, No. 10. 90 (10): 697–701. doi:10.2307/2323537. JSTOR 2323537. - Vladimir A. Boichenko; Gennadiĭ Alekseevich Leonov; Volker Reitmann (2005). Dimension theory for ordinary differential equations. Vieweg+Teubner Verlag. p. 26. ISBN 3-519-00437-2. - Pertti Lounesto (2001). Clifford algebras and spinors (2nd ed.). Cambridge University Press. p. 94. ISBN 0-521-00551-5. - Shuangzhe Liu; Gõtz Trenkler (2008). "Hadamard, Khatri-Rao, Kronecker and other matrix products" (PDF). Int J Information and systems sciences. Institute for scientific computing and education. 4 (1): 160–177. - by Eric W. Weisstein (2003). "Binet-Cauchy identity". CRC concise encyclopedia of mathematics (2nd ed.). CRC Press. p. 228. ISBN 1-58488-347-2. - Lounesto, Pertti (2001). Clifford algebras and spinors. Cambridge: Cambridge University Press. p. 193. ISBN 978-0-521-00551-7. - Greub, W (1978). Multilinear Algebra. - Hogben, L, ed. (2007). Handbook of Linear Algebra.[page needed] - Arthur, John W. (2011). Understanding Geometric Algebra for Electromagnetic Theory. IEEE Press. p. 49. ISBN 978-0470941638. - Doran, Chris; Lasenby, Anthony (2003). 
Geometric Algebra for Physicists. Cambridge University Press. pp. 401–408. ISBN 978-0521715959. - A. W. McDavid; C. D. McMullen (2006). "Generalizing Cross Products and Maxwell's Equations to Universal Extra Dimensions" (PDF). Cite journal requires - C. A. Gonano (2011). Estensione in N-D di prodotto vettore e rotore e loro applicazioni (PDF). Politecnico di Milano, Italy. - C. A. Gonano; R. E. Zich (2014). "Cross product in N Dimensions – the doublewedge product" (PDF). Cite journal requires - Lagrange, JL (1773). "Solutions analytiques de quelques problèmes sur les pyramides triangulaires". Oeuvres. vol 3. - William Kingdon Clifford (1878) Elements of Dynamic[permanent dead link], Part I, page 95, London: MacMillan & Co; online presentation by Cornell University Historical Mathematical Monographs - Nahin, Paul J. (2000). Oliver Heaviside: the life, work, and times of an electrical genius of the Victorian age. JHU Press. pp. 108–109. ISBN 0-8018-6909-9. - Crowe, Michael J. (1994). A History of Vector Analysis. Dover. p. 83. ISBN 0-486-67910-1. - Cauchy, Augustin-Louis (1900). Ouvres. 12. p. 16. - Cajori, Florian (1929). A History Of Mathematical Notations Volume II. Open Court Publishing. p. 134. ISBN 978-0-486-67766-8. - E. A. Milne (1948) Vectorial Mechanics, Chapter 2: Vector Product, pp 11 –31, London: Methuen Publishing. - Wilson, Edwin Bidwell (1901). Vector Analysis: A text-book for the use of students of mathematics and physics, founded upon the lectures of J. Willard Gibbs. Yale University Press. - T. Levi-Civita; U. Amaldi (1949). Lezioni di meccanica razionale (in Italian). Bologna: Zanichelli editore. - Hazewinkel, Michiel, ed. (2001) , "Cross product", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4 - Weisstein, Eric W. "Cross Product". MathWorld. - A quick geometrical derivation and interpretation of cross products - Gonano, Carlo Andrea; Zich, Riccardo Enrico (21 July 2014). "Cross product in N Dimensions – the doublewedge product". arXiv:1408.5799 [math.GM]. Polytechnic University of Milan, Italy. - Silagadze, Zurab K. (30 April 2002). "Multi-dimensional vector product". Journal of Physics A: Mathematical and General. 35: 4949–4953. arXiv:math/0204357. Bibcode:2002JPhA...35.4949S. doi:10.1088/0305-4470/35/23/310. (it is only possible in 7-D space) - An interactive tutorial created at Syracuse University – (requires java) - W. Kahan (2007). Cross-Products and Rotations in Euclidean 2- and 3-Space. University of California, Berkeley (PDF).
a model species of roundworm The nematodes or roundworms constitute the phylum Nematoda (also called Nemathelminthes). They are a diverse animal phylum inhabiting a broad range of environments. Taxonomically, they are classified along with insects and other moulting animals in the clade Ecdysozoa, and unlike flatworms, have tubular digestive systems with openings at both ends. Like tardigrades they have a reduced number of Hox genes, but as their sister phylum Nematomorpha has kept the ancestral protostome Hox genotype, it shows that the reduction has occurred within the nematode phylum. Nematode species can be difficult to distinguish from one another. Consequently, estimates of the number of nematode species described to date vary by author and may change rapidly over time. A 2013 survey of animal biodiversity published in the mega journal Zootaxa puts this figure at over 25,000. Estimates of the total number of extant species are subject to even greater variation. A widely referenced article published in 1993 estimated there may be over 1 million species of nematode, a claim which has since been repeated in numerous publications, without additional investigation, in an attempt to accentuate the importance and ubiquity of nematodes in the global ecosystem (rather than as a sign of agreement with the estimated taxonomic figure). Many other publications have since vigorously refuted this claim on the grounds that it is unsupported by fact, and is the result of speculation and sensationalism. More recent, fact-based estimates have placed the true figure closer to 40,000 species worldwide. Nematodes have successfully adapted to nearly every ecosystem: from marine (salt) to fresh water, soils, from the polar regions to the tropics, as well as the highest to the lowest of elevations. They are ubiquitous in freshwater, marine, and terrestrial environments, where they often outnumber other animals in both individual and species counts, and are found in locations as diverse as mountains, deserts, and oceanic trenches. They are found in every part of the earth's lithosphere, even at great depths, 0.9–3.6 km (3,000–12,000 ft) below the surface of the Earth in gold mines in South Africa. They represent 90% of all animals on the ocean floor. In total, 4.4 × 1020 nematodes inhabit the Earth's topsoil, or approximately 60 billion for each human, with the highest densities observed in tundra and boreal forests. Their numerical dominance, often exceeding a million individuals per square meter and accounting for about 80% of all individual animals on earth, their diversity of lifecycles, and their presence at various trophic levels point to an important role in many ecosystems. They have been shown to play crucial roles in polar ecosystem. The roughly 2,271 genera are placed in 256 families. The many parasitic forms include pathogens in most plants and animals. A third of the genera occur as parasites of vertebrates; about 35 nematode species occur in humans. In short, if all the matter in the universe except the nematodes were swept away, our world would still be dimly recognizable, and if, as disembodied spirits, we could then investigate it, we should find its mountains, hills, vales, rivers, lakes, and oceans represented by a film of nematodes. The location of towns would be decipherable since, for every massing of human beings, there would be a corresponding massing of certain nematodes. Trees would still stand in ghostly rows representing our streets and highways. 
The location of the various plants and animals would still be decipherable, and, had we sufficient knowledge, in many cases even their species could be determined by an examination of their erstwhile nematode parasites. - 1 Etymology - 2 Taxonomy and systematics - 3 Anatomy - 4 Reproduction - 5 Free-living species - 6 Parasitic species - 7 Epidemiology - 8 Soil ecosystems - 9 Society and culture - 10 See also - 11 References - 12 Further reading - 13 External links The word nematode comes from the Modern Latin compound of nemat- "thread" (from Greek nema, genitive nematos "thread," from stem of nein "to spin"; see needle) + -odes "like, of the nature of" (see -oid). Taxonomy and systematics The name of the group Nematoda, informally called "nematodes", came from Nematoidea, originally defined by Karl Rudolphi (1808), from Ancient Greek νῆμα (nêma, nêmatos, 'thread') and -eiδἠς (-eidēs, 'species'). It was treated as family Nematodes by Burmeister (1837). At its origin, the "Nematoidea" erroneously included Nematodes and Nematomorpha, attributed by von Siebold (1843). Along with Acanthocephala, Trematoda, and Cestoidea, it formed the obsolete group Entozoa, created by Rudolphi (1808). They were also classed along with Acanthocephala in the obsolete phylum Nemathelminthes by Gegenbaur (1859). In 1861, K. M. Diesing treated the group as order Nematoda. In 1877, the taxon Nematoidea, including the family Gordiidae (horsehair worms), was promoted to the rank of phylum by Ray Lankester. The first clear distinction between the nemas and gordiids was realized by Vejdovsky when he named a group to contain the horsehair worms the order Nematomorpha. In 1919, Nathan Cobb proposed that nematodes should be recognized alone as a phylum. He argued they should be called "nema" in English rather than "nematodes" and defined the taxon Nemates (later emended as Nemata, Latin plural of nema), listing Nematoidea sensu restricto as a synonym. However, in 1910, Grobben proposed the phylum Aschelminthes and the nematodes were included in as class Nematoda along with class Rotifera, class Gastrotricha, class Kinorhyncha, class Priapulida, and class Nematomorpha (The phylum was later revived and modified by Libbie Henrietta Hyman in 1951 as Pseudoceolomata, but remained similar). In 1932, Potts elevated the class Nematoda to the level of phylum, leaving the name the same. Despite Potts' classification being equivalent to Cobbs', both names have been used (and are still used today) and Nematode became a popular term in zoological science. Since Cobb was the first to include nematodes in a particular phylum separated from Nematomorpha, some researchers consider the valid taxon name to be Nemates or Nemata, rather than Nematoda, because of the zoological rule that gives priority to the first used term in case of synonyms. The phylogenetic relationships of the nematodes and their close relatives among the protostomian Metazoa are unresolved. Traditionally, they were held to be a lineage of their own, but in the 1990s, they were proposed to form the group Ecdysozoa together with moulting animals, such as arthropods. The identity of the closest living relatives of the Nematoda has always been considered to be well resolved. Morphological characters and molecular phylogenies agree with placement of the roundworms as a sister taxon to the parasitic Nematomorpha; together, they make up the Nematoida. 
Along with the Scalidophora (formerly Cephalorhyncha), the Nematoida form the clade Cycloneuralia, but much disagreement occurs both between and among the available morphological and molecular data. The Cycloneuralia or the Introverta—depending on the validity of the former—are often ranked as a superphylum. Due to the lack of knowledge regarding many nematodes, their systematics is contentious. An earliest and influential classification was proposed by Chitwood and Chitwood—later revised by Chitwood—who divided the phylum into two—the Aphasmidia and the Phasmidia. These were later renamed Adenophorea (gland bearers) and Secernentea (secretors), respectively. The Secernentea share several characteristics, including the presence of phasmids, a pair of sensory organs located in the lateral posterior region, and this was used as the basis for this division. This scheme was adhered to in many later classifications, though the Adenophorea were not in a uniform group. As it seems, the Secernentea are indeed a natural group of closest relatives, but the "Adenophorea" appear to be a paraphyletic assemblage of roundworms simply retaining a good number of ancestral traits. The old Enoplia do not seem to be monophyletic, either, but to contain two distinct lineages. The old group "Chromadoria" seems to be another paraphyletic assemblage, with the Monhysterida representing a very ancient minor group of nematodes. Among the Secernentea, the Diplogasteria may need to be united with the Rhabditia, while the Tylenchia might be paraphyletic with the Rhabditia. The understanding of roundworm systematics and phylogeny as of 2002 is summarised below: - Basal order Monhysterida - Class Dorylaimida - Class Enoplea - Class Secernentea - "Chromadorea" assemblage Later work has suggested the presence of 12 clades. The Secernentea—a group that includes virtually all major animal and plant 'nematode' parasites—apparently arose from within the Adenophorea. A major effort to improve the systematics of this phylum is in progress and being organised by the 959 Nematode Genomes. A complete checklist of the world's nematode species can be found in the World Species Index: Nematoda. An analysis of the mitochondrial DNA suggests that the following groupings are valid - subclass Dorylaimia - orders Rhabditida, Trichinellida and Mermithida - suborder Rhabditina - infraorders Spiruromorpha and Oxyuridomorpha The monophyly of the Ascaridomorph is uncertain. Nematodes are very small, slender worms: typically about 5 to 100 µm thick, and 0.1 to 2.5 mm long. The smallest nematodes are microscopic, while free-living species can reach as much as 5 cm (2 in), and some parasitic species are larger still, reaching over 1 m (3 ft) in length.:271 The body is often ornamented with ridges, rings, bristles, or other distinctive structures. The head of a nematode is relatively distinct. Whereas the rest of the body is bilaterally symmetrical, the head is radially symmetrical, with sensory bristles and, in many cases, solid 'head-shields' radiating outwards around the mouth. The mouth has either three or six lips, which often bear a series of teeth on their inner edges. An adhesive 'caudal gland' is often found at the tip of the tail. The epidermis is either a syncytium or a single layer of cells, and is covered by a thick collagenous cuticle. The cuticle is often of a complex structure and may have two or three distinct layers. Underneath the epidermis lies a layer of longitudinal muscle cells. 
The relatively rigid cuticle works with the muscles to create a hydroskeleton, as nematodes lack circumferential muscles. Projections run from the inner surface of muscle cells towards the nerve cords; this is a unique arrangement in the animal kingdom, in which nerve cells normally extend fibers into the muscles rather than vice versa. The oral cavity is lined with cuticle, which is often strengthened with structures such as ridges and, especially in carnivorous species, a number of teeth. The mouth often includes a sharp stylet, which the animal can thrust into its prey. In some species, the stylet is hollow and can be used to suck liquids from plants or animals. The oral cavity opens into a muscular, sucking pharynx, also lined with cuticle. Digestive glands are found in this region of the gut, producing enzymes that start to break down the food. In stylet-bearing species, these may even be injected into the prey. No stomach is present, with the pharynx connecting directly to a muscleless intestine that forms the main length of the gut. This produces further enzymes, and also absorbs nutrients through its single-cell-thick lining. The last portion of the intestine is lined by cuticle, forming a rectum, which expels waste through the anus just below and in front of the tip of the tail. The movement of food through the digestive system is the result of the body movements of the worm. The intestine has valves or sphincters at either end to help control the movement of food through the body. Nitrogenous waste is excreted in the form of ammonia through the body wall, and is not associated with any specific organs. However, the structures for excreting salt to maintain osmoregulation are typically more complex. In many marine nematodes, one or two unicellular 'renette glands' excrete salt through a pore on the underside of the animal, close to the pharynx. In most other nematodes, these specialized cells have been replaced by an organ consisting of two parallel ducts connected by a single transverse duct. This transverse duct opens into a common canal that runs to the excretory pore. Four peripheral nerves run the length of the body on the dorsal, ventral, and lateral surfaces. Each nerve lies within a cord of connective tissue lying beneath the cuticle and between the muscle cells. The ventral nerve is the largest, and has a double structure forward of the excretory pore. The dorsal nerve is responsible for motor control, while the lateral nerves are sensory, and the ventral combines both functions. At the anterior end of the animal, the nerves branch from a dense, circular nerve ring surrounding the pharynx, which serves as the brain. Smaller nerves run forward from the ring to supply the sensory organs of the head. The bodies of nematodes are covered in numerous sensory bristles and papillae that together provide a sense of touch. Behind the sensory bristles on the head lie two small pits, or 'amphids'. These are well supplied with nerve cells and are probably chemoreception organs. A few aquatic nematodes possess what appear to be pigmented eye-spots, but whether or not these are actually sensory in nature is unclear. Most nematode species are dioecious, with separate male and female individuals, though some, such as Caenorhabditis elegans, are androdioecious, consisting of hermaphrodites and rare males. Both sexes possess one or two tubular gonads. In males, the sperm are produced at the end of the gonad and migrate along its length as they mature. 
The testis opens into a relatively wide seminal vesicle and then during intercourse into a glandular and muscular ejaculatory duct associated with the vas deferens and cloaca. In females, the ovaries each open into an oviduct (in hermaphrodites, the eggs enter a spermatheca first) and then a glandular uterus. The uteri both open into a common vulva/vagina, usually located in the middle of the morphologically ventral surface. Reproduction is usually sexual, though hermaphrodites are capable of self-fertilization. Males are usually smaller than females or hermaphrodites (often much smaller) and often have a characteristically bent or fan-shaped tail. During copulation, one or more chitinized spicules move out of the cloaca and are inserted into the genital pore of the female. Amoeboid sperm crawl along the spicule into the female worm. Nematode sperm is thought to be the only eukaryotic cell without the globular protein G-actin. Eggs may be embryonated or unembryonated when passed by the female, meaning their fertilized eggs may not yet be developed. A few species are known to be ovoviviparous. The eggs are protected by an outer shell, secreted by the uterus. In free-living roundworms, the eggs hatch into larvae, which appear essentially identical to the adults, except for an underdeveloped reproductive system; in parasitic roundworms, the lifecycle is often much more complicated. Nematodes as a whole possess a wide range of modes of reproduction. Some nematodes, such as Heterorhabditis spp., undergo a process called endotokia matricida: intrauterine birth causing maternal death. Some nematodes are hermaphroditic, and keep their self-fertilized eggs inside the uterus until they hatch. The juvenile nematodes then ingest the parent nematode. This process is significantly promoted in environments with a low food supply. The nematode model species C. elegans and C. briggsae exhibit androdioecy, which is very rare among animals. The single genus Meloidogyne (root-knot nematodes) exhibits a range of reproductive modes, including sexual reproduction, facultative sexuality (in which most, but not all, generations reproduce asexually), and both meiotic and mitotic parthenogenesis. The genus Mesorhabditis exhibits an unusual form of parthenogenesis, in which sperm-producing males copulate with females, but the sperm do not fuse with the ovum. Contact with the sperm is essential for the ovum to begin dividing, but because no fusion of the cells occurs, the male contributes no genetic material to the offspring, which are essentially clones of the female. Different free-living species feed on materials as varied as algae, fungi, small animals, fecal matter, dead organisms, and living tissues. Free-living marine nematodes are important and abundant members of the meiobenthos. They play an important role in the decomposition process, aid in recycling of nutrients in marine environments, and are sensitive to changes in the environment caused by pollution. One roundworm of note, C. elegans, lives in the soil and has found much use as a model organism. C. elegans has had its entire genome sequenced, the developmental fate of every cell determined, and every neuron mapped. Nematodes that commonly parasitise humans include ascarids (Ascaris), filarias, hookworms, pinworms (Enterobius), and whipworms (Trichuris trichiura). The species Trichinella spiralis, commonly known as the 'trichina worm', occurs in rats, pigs, bears, and humans, and is responsible for the disease trichinosis. 
Baylisascaris usually infests wild animals, but can be deadly to humans, as well. Dirofilaria immitis is known for causing heartworm disease by inhabiting the hearts, arteries, and lungs of dogs and some cats. Haemonchus contortus is one of the most abundant infectious agents in sheep around the world, causing great economic damage to sheep. In contrast, entomopathogenic nematodes parasitize insects and are mostly considered beneficial by humans, but some attack beneficial insects. One form of nematode is entirely dependent upon fig wasps, which are the sole source of fig fertilization. They prey upon the wasps, riding them from the ripe fig of the wasp's birth to the fig flower of its death, where they kill the wasp, and their offspring await the birth of the next generation of wasps as the fig ripens. A newly discovered parasitic tetradonematid nematode, Myrmeconema neotropicum, apparently induces fruit mimicry in the tropical ant Cephalotes atratus. Infected ants develop bright red gasters (abdomens), tend to be more sluggish, and walk with their gasters in a conspicuous elevated position. These changes likely cause frugivorous birds to confuse the infected ants for berries, and eat them. Parasite eggs passed in the bird's feces are subsequently collected by foraging C. atratus and are fed to their larvae, thus completing the lifecycle of M. neotropicum. Similarly, multiple varieties of nematodes have been found in the abdominal cavities of the primitively social sweat bee, Lasioglossum zephyrus. Inside the female body, the nematode hinders ovarian development and renders the bee less active, thus less effective in pollen collection. Plant-parasitic nematodes include several groups causing severe crop losses. The most common genera are Aphelenchoides (foliar nematodes), Ditylenchus, Globodera (potato cyst nematodes), Heterodera (soybean cyst nematodes), Longidorus, Meloidogyne (root-knot nematodes), Nacobbus, Pratylenchus (lesion nematodes), Trichodorus, and Xiphinema (dagger nematodes). Several phytoparasitic nematode species cause histological damages to roots, including the formation of visible galls (e.g. by root-knot nematodes), which are useful characters for their diagnostic in the field. Some nematode species transmit plant viruses through their feeding activity on roots. One of them is Xiphinema index, vector of grapevine fanleaf virus, an important disease of grapes, another one is Xiphinema diversicaudatum, vector of arabis mosaic virus. Other nematodes attack bark and forest trees. The most important representative of this group is Bursaphelenchus xylophilus, the pine wood nematode, present in Asia and America and recently discovered in Europe. Agriculture and horticulture Depending on the species, a nematode may be beneficial or detrimental to plant health. From agricultural and horticulture perspectives, the two categories of nematodes are the predatory ones, which kill garden pests such as cutworms and corn earworm moths, and the pest nematodes, such as the root-knot nematode, which attack plants, and those that act as vectors spreading plant viruses between crop plants. Predatory nematodes can be bred by soaking a specific recipe of leaves and other detritus in water, in a dark, cool place, and can even be purchased as an organic form of pest control. Rotations of plants with nematode-resistant species or varieties is one means of managing parasitic nematode infestations. 
For example, marigolds, grown over one or more seasons (the effect is cumulative), can be used to control nematodes. Another is treatment with natural antagonists such as the fungus Gliocladium roseum. Chitosan, a natural biocontrol, elicits plant defense responses to destroy parasitic cyst nematodes on roots of soybean, corn, sugar beet, potato, and tomato crops without harming beneficial nematodes in the soil. Soil steaming is an efficient method to kill nematodes before planting a crop, but indiscriminately eliminates both harmful and beneficial soil fauna. The golden nematode Globodera rostochiensis is a particularly harmful variety of nematode pest that has resulted in quarantines and crop failures worldwide. CSIRO has found a 13- to 14-fold reduction of nematode population densities in plots having Indian mustard Brassica juncea green manure or seed meal in the soil. About 90% of nematodes reside in the top 15 cm of soil. Nematodes do not decompose organic matter, but, instead, are parasitic and free-living organisms that feed on living material. Nematodes can effectively regulate bacterial population and community composition—they may eat up to 5,000 bacteria per minute. Also, nematodes can play an important role in the nitrogen cycle by way of nitrogen mineralization. Society and culture Nematode worms (C. elegans), part of an ongoing research project conducted on the 2003 Space Shuttle Columbia mission STS-107, survived the re-entry breakup. It is believed to be the first known life form to survive a virtually unprotected atmospheric descent to Earth's surface. - Biological pest control - List of organic gardening and farming topics - List of parasites of humans - Toxocariasis: A helminth infection of humans caused by the dog or cat roundworm, Toxocara canis or Toxocara cati - Worm bagging - "Nematode Fossils—Nematoda". The Virtual Fossil Museum.[permanent dead link] - Classification of Animal Parasites - Garcia, Lynne (29 October 1999). "Classification of Human Parasites, Vectors, and Similar Organisms" (PDF). Los Angeles, California: Department of Pathology and Laboratory Medicine, UCLA Medical Center. Retrieved 21 July 2017. - How Weird is The Worm? Evolution of the Developmental Gene Toolkit in Caenorhabditis elegans - MDPI - Hodda, M (2011). "Phylum Nematoda Cobb, 1932. In: Zhang, Z.-Q. (Ed.) Animal biodiversity: An outline of higher-level classification and survey of taxonomic richness". Zootaxa. 3148: 63–95. doi:10.11646/zootaxa.3148.1.11. - Zhang, Z (2013). "Animal biodiversity: An update of classification and diversity in 2013. In: Zhang, Z.-Q. (Ed.) Animal Biodiversity: An Outline of Higher-level Classification and Survey of Taxonomic Richness (Addenda 2013)". Zootaxa. 3703 (1): 5–11. doi:10.11646/zootaxa.3703.1.3. - "Recent developments in marine benthic biodiversity research". ResearchGate. Retrieved 5 November 2018. - Lambshead, PJD (1993). "Recent developments in marine benthic biodiversity research". Oceanis. 19 (6): 5–24. Anderson, Roy C. (8 February 2000). Nematode Parasites of Vertebrates: Their Development and Transmission. CABI. pp. 1–2. ISBN 9780851994215. Estimates of 500,000 to a million species have no basis in fact. - Borgonie G, García-Moyano A, Litthauer D, Bert W, Bester A, van Heerden E, Möller C, Erasmus M, Onstott TC (June 2011). "Nematoda from the terrestrial deep subsurface of South Africa". Nature. 474 (7349): 79–82. Bibcode:2011Natur.474...79B. doi:10.1038/nature09974. hdl:1854/LU-1269676. PMID 21637257. - Lemonick MD (8 June 2011). 
"Could 'worms from Hell' mean there's life in space?". Time. ISSN 0040-781X. Retrieved 8 June 2011. - Bhanoo SN (1 June 2011). "Nematode found in mine is first subsurface multicellular organism". The New York Times. ISSN 0362-4331. Retrieved 13 June 2011. - "Gold mine". Nature. 474 (7349): 6. June 2011. doi:10.1038/474006b. PMID 21637213. - Drake N (1 June 2011). "Subterranean worms from hell: Nature News". Nature News. doi:10.1038/news.2011.342. Retrieved 13 June 2011. - Borgonie G, García-Moyano A, Litthauer D, Bert W, Bester A, van Heerden E, Möller C, Erasmus M, Onstott TC (2 June 2011). "Nematoda from the terrestrial deep subsurface of South Africa". Nature. 474 (7349): 79–82. Bibcode:2011Natur.474...79B. doi:10.1038/nature09974. hdl:1854/LU-1269676. ISSN 0028-0836. PMID 21637257. - Danovaro R, Gambi C, Dell'Anno A, Corinaldesi C, Fraschetti S, Vanreusel A, Vincx M, Gooday AJ (January 2008). "Exponential decline of deep-sea ecosystem functioning linked to benthic biodiversity loss". Curr. Biol. 18 (1): 1–8. doi:10.1016/j.cub.2007.11.056. PMID 18164201. Lay summary – EurekAlert!. - van den Hoogen, Johan; Geisen, Stefan; Routh, Devin; Ferris, Howard; Traunspurger, Walter; Wardle, David A.; de Goede, Ron G. M.; Adams, Byron J.; Ahmad, Wasim (2019-07-24). "Soil nematode abundance and functional group composition at a global scale". Nature. 572 (7768): 194–198. doi:10.1038/s41586-019-1418-6. ISSN 0028-0836. - Platt HM (1994). "foreword". In Lorenzen S, Lorenzen SA (eds.). The phylogenetic systematics of freeliving nematodes. London, UK: The Ray Society. ISBN 978-0-903874-22-9. - Cary, S. Craig; Green, T. G. Allan; Storey, Bryan C.; Sparrow, Ashley D.; Hogg, Ian D.; Katurji, Marwan; Zawar-Reza, Peyman; Jones, Irfon; Stichbury, Glen A. (2019-02-15). "Biotic interactions are an unexpected yet critical control on the complexity of an abiotically driven polar ecosystem". Communications Biology. 2 (1): 62. doi:10.1038/s42003-018-0274-5. ISSN 2399-3642. PMC 6377621. PMID 30793041. - Adams, Byron J.; Wall, Diana H.; Storey, Bryan C.; Green, T. G. Allan; Barrett, John E.; S. Craig Cary; Hopkins, David W.; Lee, Charles K.; Bottos, Eric M. (2019-02-15). "Nematodes in a polar desert reveal the relative role of biotic interactions in the coexistence of soil animals". Communications Biology. 2 (1): 63. doi:10.1038/s42003-018-0260-y. ISSN 2399-3642. PMC 6377602. PMID 30793042. - Roy C. Anderson (8 February 2000). Nematode Parasites of Vertebrates: Their development and transmission. CABI. p. 1. ISBN 978-0-85199-786-5. - Cobb, Nathan (1914). "Nematodes and their relationships". Yearbook. United States Department of Agriculture. pp. 472, 457–490. Archived from the original on 9 June 2016. Retrieved 25 September 2012. Quote on p. 472. - Chitwood BG (1957). "The English word "Nema" revised". Systematic Biology. 4 (45): 1619. doi:10.2307/sysbio/6.4.184. - Siddiqi MR (2000). Tylenchida: parasites of plants and insects. Wallingford, Oxon, UK: CABI Pub. ISBN 978-0-85199-202-0. - Schmidt-Rhaesa A (2014). "Gastrotricha, Cycloneuralia and Gnathifera: General History and Phylogeny". In Schmidt-Rhaesa A (ed.). Handbook of Zoology (founded by W. Kükenthal). 1, Nematomorpha, Priapulida, Kinorhyncha, Loricifera. Berlin, Boston: de Gruyter. - Cobb NA (1919). "The orders and classes of nemas". Contrib. Sci. Nematol. 8: 213–216. - Wilson, E. O. "Phylum Nemata". Plant and insect parasitic nematodes. Retrieved 29 April 2018. - "ITIS report: Nematoda". Itis.gov. Retrieved 12 June 2012. - "Bilateria". Tree of Life Web Project. 
Tree of Life Web Project. 2002. Retrieved 2 November 2008. - Chitwood BG, Chitwood MB (1933). "The characters of a protonematode". J Parasitol. 20: 130. - Chitwood BG (1937). "A revised classification of the Nematoda". Papers on Helminthology published in commemoration of the 30 year Jubileum of ... K.J. Skrjabin ... Moscow: All-Union Lenin Academy of Agricultural Sciences. pp. 67–79. - Chitwood BG (1958). "The designation of official names for higher taxa of invertebrates". Bull Zool Nomencl. 15: 860–895. doi:10.5962/bhl.part.19410. - Coghlan, A. (7 Sep 2005). "Nematode genome evolution" (PDF). WormBook: 1–15. doi:10.1895/wormbook.1.15.1. PMC 4781476. PMID 18050393. Retrieved 13 January 2016. - Blaxter ML, De Ley P, Garey JR, Liu LX, Scheldeman P, Vierstraete A, Vanfleteren JR, Mackey LY, Dorris M, Frisse LM, Vida JT, Thomas WK (March 1998). "A molecular evolutionary framework for the phylum Nematoda". Nature. 392 (6671): 71–75. Bibcode:1998Natur.392...71B. doi:10.1038/32160. PMID 9510248. - "Nematoda". Tree of Life Web Project. Tree of Life Web Project. 2002. Retrieved 2 November 2008. - Holterman M, van der Wurff A, van den Elsen S, van Megen H, Bongers T, Holovachov O, Bakker J, Helder J (2006). "Phylum-wide analysis of SSU rDNA reveals deep phylogenetic relationships among nematodes and accelerated evolution toward crown Clades". Mol Biol Evol. 23 (9): 1792–1800. doi:10.1093/molbev/msl044. PMID 16790472. - "959 Nematode Genomes – NematodeGenomes". Nematodes.org. 11 November 2011. Retrieved 12 June 2012. - World Species Index: Nematoda. 2012. - Liu GH, Shao R, Li JY, Zhou DH, Li H, Zhu XQ (2013). "The complete mitochondrial genomes of three parasitic nematodes of birds: a unique gene order and insights into nematode phylogeny". BMC Genomics. 14 (1): 414. doi:10.1186/1471-2164-14-414. PMC 3693896. PMID 23800363. - Nyle C. Brady & Ray R. Weil (2009). Elements of the Nature and Properties of Soils (3rd ed.). Prentice Hall. ISBN 9780135014332. - Ruppert EE, Fox RS, Barnes RD (2004). Invertebrate Zoology: A Functional Evolutionary Approach (7th ed.). Belmont, California: Brooks/Cole. ISBN 978-0-03-025982-1. - Weischer B, Brown DJ (2000). An Introduction to Nematodes: General Nematology. Sofia, Bulgaria: Pensoft. pp. 75–76. ISBN 978-954-642-087-9. - Barnes RG (1980). Invertebrate zoology. Philadelphia: Sanders College. ISBN 978-0-03-056747-6. - "The sensory cilia of Caenorhabditis elegans". www.wormbook.org. - Kavlie, RG; Kernan, MJ; Eberl, DF (May 2010). "Hearing in Drosophila requires TilB, a conserved protein associated with ciliary motility". Genetics. 185 (1): 177–88. doi:10.1534/genetics.110.114009. PMC 2870953. PMID 20215474. - Lalošević, V.; Lalošević, D.; Capo, I.; Simin, V.; Galfi, A.; Traversa, D. (2013). "High infection rate of zoonotic Eucoleus aerophilus infection in foxes from Serbia". Parasite. 20: 3. doi:10.1051/parasite/2012003. PMC 3718516. PMID 23340229. - Bell G (1982). The masterpiece of nature: the evolution and genetics of sexuality. Berkeley: University of California Press. ISBN 978-0-520-04583-5. - Johnigk SA, Ehlers RU (1999). "Endotokia matricida in hermaphrodites of Heterorhabditis spp. and the effect of the food supply". Nematology. 1 (7–8): 717–726. doi:10.1163/156854199508748. ISSN 1388-5545. - Yanoviak SP, Kaspari M, Dudley R, Poinar G (April 2008). "Parasite-induced fruit mimicry in a tropical canopy ant". Am. Nat. 171 (4): 536–44. doi:10.1086/528968. PMID 18279076. - Batra, Suzanne W. T. (1965-10-01). 
"Organisms associated with Lasioglossum zephyrum (Hymenoptera: Halictidae)". Journal of the Kansas Entomological Society. 38 (4): 367–389. JSTOR 25083474. - Purcell M, Johnson MW, Lebeck LM, Hara AH (1992). "Biological Control of Helicoverpa zea (Lepidoptera: Noctuidae) with Steinernema carpocapsae (Rhabditida: Steinernematidae) in Corn Used as a Trap Crop". Environmental Entomology. 21 (6): 1441–1447. doi:10.1093/ee/21.6.1441. - Riotte L (1975). Secrets of companion planting for successful gardening. p. 7. - US application 2008072494, Stoner RJ, Linden JC, "Micronutrient elicitor for treating nematodes in field crops", published 2008-03-27 - Loothfar R, Tony S (22 March 2005). "Suppression of root knot nematode (Meloidogyne javanica) after incorporation of Indian mustard cv. Nemfix as green manure and seed meal in vineyards". Australasian Plant Pathology. 34 (1): 77–83. doi:10.1071/AP04081. Retrieved 14 June 2010. - Pramer C (1964). "Nematode-trapping fungi". Science. 144 (3617): 382–388. Bibcode:1964Sci...144..382P. doi:10.1126/science.144.3617.382. PMID 14169325. - Hauser JT (December 1985). "Nematode-trapping fungi" (PDF). Carnivorous Plant Newsletter. 14 (1): 8–11. - Ahrén D, Ursing BM, Tunlid A (1998). "Phylogeny of nematode-trapping fungi based on 18S rDNA sequences". FEMS Microbiology Letters. 158 (2): 179–184. doi:10.1016/s0378-1097(97)00519-3. PMID 9465391. - "Columbia Survivors". Astrobiology Magazine. Jan 1, 2006. - Szewczyk, Nathaniel J.; Mancinelli, Rocco L.; McLamb, William; Reed, David; Blumberg, Baruch S.; Conley, Catharine A. (December 2005). "Caenorhabditis elegans Survives Atmospheric Breakup of STS–107, Space Shuttle Columbia". Astrobiology. 5 (6): 690–705. Bibcode:2005AsBio...5..690S. doi:10.1089/ast.2005.5.690. PMID 16379525. - Atkinson, H.J. (1973). "The respiratory physiology of the marine nematodes Enoplus brevis (Bastian) and E. communis (Bastian): I. The influence of oxygen tension and body size" (PDF). J. Exp. Biol. 59 (1): 255–266. - "Worms survived Columbia disaster". BBC News. 1 May 2003. Retrieved 4 Nov 2008. - Gubanov, N.M. (1951). "Giant nematoda from the placenta of Cetacea; Placentonema gigantissima nov. gen., nov. sp". Proc. USSR Acad. Sci. 77 (6): 1123–1125. [in Russian]. - Kaya, Harry K.; et al. (1993). "An Overview of Insect-Parasitic and Entomopathogenic Nematodes". In Bedding, R.A. (ed.). Nematodes and the Biological Control of Insect Pests. Csiro Publishing. ISBN 9780643105911. - "Giant kidney worm infection in mink and dogs". Merck Veterinary Manual (MVM). 2006. Archived from the original on 3 March 2016. Retrieved 10 February 2007. - White JG, Southgate E, Thomson JN, Brenner S (August 1976). "The structure of the ventral nerve cord of Caenorhabditis elegans". Philos. Trans. R. Soc. Lond. B Biol. Sci. 275 (938): 327–348. Bibcode:1976RSPTB.275..327W. doi:10.1098/rstb.1976.0086. PMID 8806. - Lee, Donald L, ed. (2010). The biology of nematodes. London: Taylor & Francis. ISBN 978-0415272117. Retrieved 16 December 2014. - De Ley, P & Blaxter, M (2004). "A new system for Nematoda: combining morphological characters with molecular trees, and translating clades into ranks and taxa". In R Cook & DJ Hunt (eds.). Nematology Monographs and Perspectives. 2. E.J. Brill, Leiden. pp. 
633–653.CS1 maint: uses authors parameter (link) CS1 maint: uses editors parameter (link) |Wikimedia Commons has media related to Nematoda.| |Wikisource has the text of the 1911 Encyclopædia Britannica article Nematoda.| - Harper Adams University College Nematology Research - Nematodes/roundworms of man - European Society of Nematologists - Nematode.net: Repository of parasitic nematode sequences. - NeMys World free-living Marine Nematodes database - Nematode Virtual Library - International Federation of Nematology Societies - Society of Nematologists - Australasian Association of Nematologists - Research on nematodes and longevity - Nematode on BBC - Nematode worms in an aquarium - Phylum Nematoda – nematodes on the UF / *IFAS Featured Creatures Web site
The word asteroid means "star-like": these minor bodies of the Solar System do not emit light of their own, but are visible only because they reflect sunlight. The sizes of asteroids range from dust particles to significant bodies hundreds of miles in diameter (Ceres, the largest observed, is 913 km in diameter). Globally, the total mass of all the asteroids is less than that of the Moon. Asteroids are found in different places in the solar system: most of them orbit the Sun grouped in the main belt, while others lie farther out, such as the Trojans, which share the orbit of Jupiter, or the Centaurs, whose orbits in the very outer solar system are dynamically unstable. Asteroids that, through some dynamical mechanism, closely approach the Earth are named Near-Earth Asteroids (NEAs) and form a class of particular interest.

Comets are made of several distinct parts. First of all, there is a solid snowball, called the nucleus, made of dust and ice. When the comet comes near the Sun, the nucleus heats up and becomes active, causing volatile gases to sublime. The released gas and dust form a cloud, the coma, while the dust component, pushed away by solar radiation pressure, stretches out into the tail.

The Lyapunov time (L) is a parameter that measures the rate at which orbits diverge (in other words, it measures how chaotic an orbit is). L is the period of time needed for the distance between two nearby possible orbits to increase by a factor of e. The bigger this value, the more stable the orbit (L has typical values of a few years for chaotic asteroids, up to 5 million years for the inner planets and hundreds of millions of years for the outer planets). For a non-chaotic orbit, the distance between two nearby orbits with similar initial conditions grows only linearly with time: the two orbits diverge slowly. For chaotic orbits, the distance grows exponentially, d(t) = d0 · e^(t/L); after a period of 2L, for example, the two orbits are separated by a distance of d0 · e², and the separation keeps increasing exponentially with time.

The energy freed during an impact is usually measured in megatons (MT). 1 MT is roughly the energy of some 70 Hiroshima bombs. To give an idea of the scale of energies: a compact body a few tens of metres across hitting the Earth at a speed of 20 km/s releases about 1 MT.

The word NEO stands for Near-Earth Object, meaning a minor body of the solar system (in other words, a comet or an asteroid) which comes into the Earth's neighbourhood. A first classification of NEOs separates NECs (near-Earth comets) from NEAs (near-Earth asteroids). NEAs constitute the vast majority of NEOs and are further divided into three main families, depending on the features of their orbits. In particular, they are classified into three groups (Amors, Apollos and Atens) according to their perihelion and aphelion distances and their semi-major axes. |Figure: the Earth's orbit (in blue) and the classical shapes of the orbits of the three main classes of NEAs.|

Region of uncertainty - virtual asteroid. Consider an asteroid for which a first, single observation has been made: its real position can only be determined with some error, so a region of uncertainty can be associated with the asteroid. Every point inside this region is a possible position of the object and is therefore called a virtual asteroid. For every virtual asteroid a trajectory can be calculated by computer.
This can be done over periods of at most about 50 years. By carrying out this determination for every virtual asteroid inside the region, it is possible to follow how the region evolves in time, moving and changing shape (since every virtual asteroid can follow a slightly different orbit).
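A minimal numerical sketch of the virtual-asteroid idea, under simplifying assumptions: a toy two-body model and invented element values and uncertainties (they do not refer to any real object). Clones sampled inside the uncertainty region drift apart mainly along the orbit, because each clone has a slightly different mean motion, so the region stretches with time.

```python
import numpy as np

# Toy illustration of "virtual asteroids": sample orbital clones inside the
# uncertainty region of a newly observed object and watch the region stretch.
# All numbers below are invented for illustration (not a real asteroid).
GM_SUN = 2.9591220828559e-4   # AU^3/day^2 (square of the Gaussian gravitational constant)

rng = np.random.default_rng(42)
n_clones = 1000

# Nominal orbit and (illustrative) 1-sigma uncertainties from a short observed arc.
a_nom, sigma_a = 2.20, 1e-4     # semi-major axis [AU]
L_nom, sigma_L = 40.0, 0.01     # mean longitude at epoch [deg]

a  = rng.normal(a_nom, sigma_a, n_clones)   # each clone is one "virtual asteroid"
L0 = rng.normal(L_nom, sigma_L, n_clones)

def mean_longitude(a, L0, t_days):
    """Propagate each clone's mean longitude with its own (two-body) mean motion."""
    n = np.degrees(np.sqrt(GM_SUN / a**3))  # mean motion in deg/day
    return L0 + n * t_days                  # not reduced mod 360, so the spread is easy to read

for years in (1, 10, 50):
    L = mean_longitude(a, L0, 365.25 * years)
    print(f"after {years:>2} yr the clones span ~{np.ptp(L):.2f} deg along the orbit")
```

Real orbit determination replaces the Gaussian toy sampling with the full covariance of the fitted orbit and a numerical integrator including planetary perturbations, but the qualitative behaviour is the same: the cloud of virtual asteroids smears out along the orbit as the propagation interval grows.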
The orbital elements of each planet are the eccentricity and the direction of the apsidal line of its orbit, defined by the ecliptic longitude of either of its apses, i.e., the two points on its orbit where the planet is either furthest from or closest to the Earth, which are called the planet's apogee and perigee. In the geocentric view of the solar system, the eccentricity of Venus is a bit less than half of the solar one, and its apogee is located behind that of the Sun. Ptolemy correctly found that the apogee of Venus is behind that of the Sun, but determined the eccentricity of Venus to be exactly half the solar one. In the Indian Midnight System of Āryabhaṭa (b. ad 476), the eccentricity of Venus is assumed to be half the solar one, and the longitudes of their apogees are also assumed to be the same. This hypothesis became prevalent in early medieval Middle Eastern astronomy (ad 800–1000), where its adoption resulted in large errors of more than 10° in the values for the longitude of the apogee of Venus adopted by Yaḥyā b. Abī Manṣūr (d. ad 830), al-Battānī (d. ad 929), and Ibn Yūnus (d. ad 1007). In Western Islamic astronomy, it was used in combination with Ibn al-Zarqālluh's (d. ad 1100) solar model with variable eccentricity, which only by coincidence resulted in accurate values for the eccentricity of Venus. In late Islamic Middle Eastern astronomy (from ad 1000 onwards), Āryabhaṭa's hypothesis gradually lost its dominance. Ibn al-A‘lam (d. ad 985) seems to have been the first Islamic astronomer who rejected it. Late Eastern Islamic astronomers from the middle of the thirteenth century onwards arrived at the correct understanding that the eccentricity of Venus should be somewhat less than half of the solar one. Its most accurate medieval value was measured at the Samarqand observatory in the fifteenth century. Also, the values for the longitude of the apogee of Venus show a significant improvement in late Middle Eastern Islamic works, reaching an accuracy better than a degree in Khāzinī's Mu‘tabar zīj, Ibn al-Fahhād's ‘Alā'ī zīj, the Īlkhānī zīj, and Ulugh Beg's Sulṭānī zīj.

Editor's note: This paper first appeared in Journal for the History of Astronomy, 2019, Vol. 50(1), 46–8. The PDF can be retrieved online via (Source).

This paper contains, at its core, a case study on the interaction between the Ptolemaic, Indian, and medieval Islamic astronomical traditions. It particularly discusses the consequences of the incorporation of a hypothesis of Āryabhaṭa into Ptolemy's model for Venus, as well as its use together with Ibn al-Zarqālluh's solar model, for the accuracy of the values adopted for the longitude of the apogee of Venus in medieval Islamic astronomy. Improving Ptolemy's values for the planetary parameters was one of the main purposes of observational astronomy in the medieval Islamic period. Thus, another goal of our study is to determine the accuracy attained by Islamic astronomers in the measurement of the orbital elements of Venus in the different periods and the various domains of the Islamic realm. This paper is organized as follows. In section "Ptolemy's model for Venus," we introduce Ptolemy's model for Venus and the values he measured for its orbital elements. In section "Derivation of the geocentric orbital elements of Venus from the heliocentric parameters," we discuss the derivation of the geocentric orbital elements of Venus from the heliocentric ones in order to compute their true values for the medieval period.
This will enable us to evaluate the accuracy of the historical values. The reader may skip this technical section without missing anything indispensable for an understanding of the later discussion. In section “Medieval Islamic astronomers’ values for the fundamental parameters of Venus,” the historical values are chronologically classified and discussed with reference to both the medieval Islamic geographical domains from which they originate (either the Middle East or Western Islamic lands) and the way in which they were determined, i.e., from the interaction and/or combination of the traditions and hypotheses or from observations. In section “Discussion and conclusion,” the main findings of the study are discussed and summarized. Ptolemy’s model for Venus is structurally similar to his model for the superior planets.1 The planet P (Figure 1) revolves counterclockwise, i.e., in the direction of increasing longitude, on an epicycle of radius r at a constant angular velocity relative to the mean epicyclic apogee A′m, which is a point on the circumference of the epicycle defined by the prolongation of the line which connects the centre E of the uniform motion (the so-called equant point) and the centre C of the epicycle. The centre C of the epicycle itself revolves in the direction of increasing longitude on a fixed eccentric with the radius OC, which according to Ptolemy’s norm has the arbitrary length R = 60 units. The centre O of the eccentric is displaced from the Earth T by the eccentricity OT = e1. The motion of C on the eccentric is uniform, at a constant angular velocity equal to the solar mean motion, with respect to the point E, which is displaced from O opposite to T by the eccentricity OE = e2. Accordingly, the vector extended from E to C points to the mean Sun. The line passing through T, O, and E defines the apsidal line of the eccentric, the apogee A on the side of E, and the perigee Π on the side of T. In the case of both inferior planets, Ptolemy first determines the direction of the apsidal line by observations of, at least, two equal maximum elongations of either planet in opposite directions, once as a morning star (i.e. at a maximum western elongation) and another time as an evening star (i.e. at a maximum eastern elongation). Such a situation clearly indicates that the centre of the epicycle occupies at both instances symmetrical positions with respect to the apsidal line, which thus passes midway through the ecliptic arcs between the two longitudes of Venus (which can be directly derived from the observations) or those of the mean Sun (which can be calculated from an adopted solar theory). Next, in order to determine which of the two apses marked by the direction of the apsidal line stands for the apogee/perigee, one requires two additional observations of the maximum elongations, when the centre of the epicycle, i.e., the mean Sun, is located at either of the two apses. It is clear that the maximum elongation of an inner planet when at the apogee (A in Figure 1) would be less than at the perigee (Π). From these latter two observations, the eccentricity e1 of the eccentric and the radius r of the epicycle can be derived as well. The eccentricity of the equant point from the Earth (e1 + e2) is computed from the maximum elongation of the planet from the mean Sun when the centre C of the epicycle is at the orbital quadratures (i.e. ∠AEC = 90° or 270°).2 In Almagest X.1−3,3 Ptolemy finds that the two eccentricities e1 and e2 are identical: e = e1 = e2 = 1;15. 
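A rough numerical illustration of this procedure may help (a sketch only: the paper quotes e1 = e2 = 1;15, while the epicycle radius r = 43;10 used below is the commonly cited Almagest figure for Venus, introduced here as an assumption). At a greatest elongation the line of sight from the Earth is tangent to the epicycle, so the elongation from the mean Sun is arcsin(r/TC), where TC is the distance from the Earth to the epicycle centre, equal to R + e1 when the centre is at the apogee of the deferent and R − e1 when it is at the perigee:

```python
import math

# Sketch of how maximum elongations at the apogee and perigee of the deferent
# constrain Ptolemy's parameters for Venus.  The sexagesimal values used here
# (e1 = 1;15, r = 43;10 with R = 60) are the familiar Almagest figures, quoted
# as assumptions for illustration; the paper itself only cites e1 = e2 = 1;15.
def sexagesimal(units, minutes=0):
    """Convert a value written 'units;minutes' to a decimal number."""
    return units + minutes / 60.0

R  = 60.0                    # Ptolemy's conventional deferent radius
e1 = sexagesimal(1, 15)      # eccentricity of the eccentric -> 1.25
r  = sexagesimal(43, 10)     # radius of Venus' epicycle     -> 43.1667

def max_elongation(distance_to_epicycle_centre):
    """Greatest elongation occurs when the line of sight is tangent to the epicycle."""
    return math.degrees(math.asin(r / distance_to_epicycle_centre))

print(f"max elongation with epicycle centre at apogee : {max_elongation(R + e1):.2f} deg")
print(f"max elongation with epicycle centre at perigee: {max_elongation(R - e1):.2f} deg")
# The elongation is visibly smaller at the apogee than at the perigee, which is
# exactly the asymmetry Ptolemy uses to decide which apsis is which.
```

Reversing the computation, i.e., solving two such tangency relations for e1 and r from a pair of observed greatest elongations, is in outline how these parameters can be obtained; Ptolemy's own value e1 = e2 = 1;15 is the figure discussed in the text.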
This value of 1;15 is also equal to half of his value for the eccentricity of the Sun/Earth in Almagest III.44 (all values given for the eccentricities in this paper are according to Ptolemy's norm R = 60). Moreover, Ptolemy determines the apogee of Venus as being located at a longitude of 55°, i.e., behind the solar apogee, which he derived to be tropically fixed at a longitude of 65.5°. Ptolemy's value for the longitude of the apogee of Venus has an error of about −2.5°.5 Each planet in our solar system revolves around the Sun on an elliptical orbit with a semi-major axis a and a semi-minor axis b. The Sun is located in one of the two foci of the orbit. The distance of either of the foci from the centre of the orbit is indicated by c. The eccentricity of an ellipse is defined as e = c/a. Hence, the distance between the Sun and the centre of the orbit is c = ea. There is a conventional difference in the concept of the eccentricity between ancient and modern astronomy. As we have seen in the previous section, in ancient astronomy the eccentricity indicates the "distance" of the centre of the circular orbit from the central body (the Earth), or that between the equant point and the centre of the orbit or the central body, in terms of the same arbitrary length assigned to the radius of the orbit of all planets. In modern astronomy, it stands for the "ratio" between the distance of the centre of the orbit from the central body (the Sun) and the semi-major axis of the orbit. The extension of the major axis of the orbit of a planet forms its apsidal line, whose direction with respect to the zero point in a reference system of coordinates (e.g. the vernal equinox in the tropical reference system) represents the spatial orientation of its orbit. The apsis denoting its greatest distance from the Sun is the aphelion (apogee in the case of the Earth), and the other one, diametrically opposed to the aphelion/apogee and showing its least distance from the Sun, is the perihelion/perigee. For the derivation of the geocentric orbital elements of a planet from the heliocentric ones, the following considerations should be taken into account:6
1. The eccentricity of the geocentric orbit (the eccentric deferent) of each planet is the distance between the centres of the elliptical orbits of the Earth and that planet, which is equal to the vector sum of the distances of the centres of the elliptical orbits of the Earth and that planet from the Sun. Since the planetary orbits are inclined to the orbital plane of the Earth (i.e. the ecliptic), the distances between the centres of their orbits and the Sun should be projected onto the Earth's orbital plane.
2. The extension into both directions of the geocentric eccentricity thus determined demarcates the geocentric apsidal line.
In the case of the inferior planets, a further condition is also required:
3. The equant point is the projection of the equant point (i.e. the empty focus) of the Earth's elliptical orbit onto the geocentric apsidal line.
Also, it is evident that after the derivation of the geocentric eccentricities, the orbit of the Earth serves as the deferent of an inferior planet, while the orbit of an inferior planet stands for its epicycle. This criterion is clarified schematically in what follows. In Figure 2, the heliocentric elliptical orbits of the Earth and Venus are drawn to scale for Ptolemy's time. The large ellipse shows the Earth's orbit, with Π0 being its perigee and A0 its apogee.
The small ellipse indicates the orbit of Venus, with Π′ being its perihelion and A′ its aphelion. Note that because of the extreme smallness of the eccentricities, both orbits can hardly be distinguished from circles. For the same reason, also the distances between the centres of the orbits of the two planets cannot be exhibited properly. The inset in Figure 2 shows a close-up of the orientations and relative sizes of the heliocentric orbital elements of the Earth and Venus with respect to each other. The Sun is located in S. The point O stands for the centre of the Earth’s elliptical orbit; OS, the eccentricity e0 of the Earth (note that the semi-major axis a0 of the Earth is taken as 1 Astronomical Unit, AU; hence, OS = e0a0 = e0); the point T, the centre of Venus’ elliptical orbit as projected onto the Earth’s orbital plane (i.e. the ecliptic). In order to compute the distance TS, first the heliocentric eccentricity e′ of Venus should be multiplied by the semi-major axis of Venus (a′ ≈ 0.72 AU); then, since the orbit of Venus is inclined from that of the Earth at an angle i′ (this angle slightly changed during the past two millennia from about 3;22° at the beginning of the Common Era to about 3;24° in ad 2000), the result should also be projected onto the Earth’s orbital plane; therefore, ST = e′ a′ cos i, where i is the inclination of the heliocentric apsidal line of Venus from the Earth’s orbital plane. Α0Π0 and Α′Π′ indicate, respectively, the directions of the heliocentric orbits of the Earth and Venus. Thus, the two vectors SO and TS are combined in order to form the geocentric eccentricity TO = e1. Then, when the whole system is transformed to the geocentric view, the point O is the centre of the circular geocentric orbit (the eccentric deferent) and the point T stands for the place of the fictitious Earth. TO extended to both directions serves as the geocentric apsidal line, which makes an angle η0 (= ∠TOS) with the Earth’s apsidal line. The point M is the empty focus of the Earth’s orbit, which, projected onto the geocentric apsidal line, marks the equant point E at an eccentricity EO = e2 from the centre O of the eccentric deferent. Therefore, both eccentricities and the longitude λA of the geocentric apogee can be simply computed with a precision sufficient for the evaluation of the accuracy of medieval values. Since the eccentricity e0 of the Earth/Sun remains more than twice as large as e′ (precisely speaking, 2.25 at the beginning of the Common Era to 2.47 in ad 2000), the eccentricities e1 and e2 of Venus are substantially more dependent on the eccentricity e0 of the Earth than the heliocentric eccentricity e′ of the planet. Thus, both e1 and e2 remain a bit smaller than the eccentricity e0 of the Earth or, in other words, a bit smaller than half the eccentricity of the Sun in the Ptolemaic solar model.7 Also, because of the smallness of e′, the geocentric apsidal line of Venus remains close to the Earth’s apsidal line, so that the angle η0 changes only from 13.8° at the beginning of the Common Era to 10.7° in ad 2000. In addition, since the heliocentric eccentricities e0 and e′ and the angle between the heliocentric apsidal lines of the Earth and Venus decrease with the passing of time, the geocentric eccentricities e1 and e2 decrease as well.8 The formulae we derived for the geocentric orbital elements of the planet are as follows: in which T = (JD – 2,451,545.0)/365,250 is the time measured in thousands of Julian years from 1 January 2000 (JD 2,451,545.0). 
The two eccentricities are for an orbital radius equal to 1 and, therefore, should be multiplied by 60 to correspond to the Ptolemaic norm. Also, the annual motion of the apsidal line is the coefficient of T multiplied by 10−3: 67.6″/y or ~ 1°/53.2y. These formulae can safely be used in order to determine the accuracy of any historical values for the orbital elements of Venus in the Ptolemaic context. The changes in the past 2000 years are given in the following: A key point in the derivation of the geocentric orbital elements from the heliocentric ones in the case of an inferior planet is that the condition (2) in the criterion mentioned earlier should be checked for consistency with Ptolemy’s conception of the apsidal line of an inferior planet. As said in the previous section, the apsidal line of an inferior planet defines the spatial direction of the diameter of its deferent on which its epicycle (i.e. its heliocentric elliptical orbit) appears to have the largest and smallest angular sizes as seen from the Earth (i.e. at the perigee and the apogee). Now, if AΠ in Figure 2 is in reality the geocentric apsidal line of Venus, its orbit as appearing to an Earth-bound observer has its maximum angular size when the Earth is at A (in this situation, the heliocentric orbit of Venus, corresponding to its epicycle in a geocentric view, is along the direction to the perigee Π of the deferent); conversely, the orbit has its minimum angular size when the Earth is located at Π (in this situation, the line of sight to the orbit/epicycle of Venus points to the apogee A of the deferent). The six values for the angular sizes of the orbit/epicycle of Venus shown in Figure 2 provide rough estimates for the critical values in the three situations: (1) the Earth being on its apsidal line, A0Π0; (2) on the heliocentric apsidal line of Venus, A′Π′; and (3) on the geocentric apsidal line of Venus derived according to the criterion settled forth above, i.e., AΠ. Obviously, an observer on the Earth will see the greatest and least angular sizes of the orbit/epicycle of Venus when it is located along its geocentric apsidal line as derived according to the above criterion. This assures us that our criterion is in agreement with Ptolemy’s conception of the apsidal line of Venus.9 The values of e1, e2, and ½(e1 + e2) are plotted against time in Figure 3. The graphs of e1 and e2 represent upper and lower limits of the tolerance band of the eccentricity of Venus. Figure 4 shows the graph of the longitude λA of the apogee of the Sun and Venus. Historical values are indicated in both figures. These values will be discussed in the next section. The values adopted for the solar and Venus’ maximum equations of centre (qmax; the greatest size of angle ECT in Figure 1) and the corresponding eccentricities in medieval Middle Eastern zījes are summarized in Table 1. Except for the works in which the eccentricity values are explicitly given, they are extracted from the values for the maximum equation of centre (in the solar eccentric model: e = R sin(qmax), and in Ptolemy’s eccentric equant model of the superior planets and Venus: e1 = e2 = R tan(qmax/2)). The eccentricity values are also shown in Figure 3 along with the graphs of the geocentric eccentricities of the Sun and Venus. The values for the longitude of the apogee of Venus from these sources are listed in Table 2 and are illustrated in Figure 4 along with the graphs of the longitude of the geocentric apogees of the Sun and Venus. 
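To make the criterion of this section concrete, the following sketch carries out the construction numerically for a single epoch. The heliocentric elements plugged in (eccentricities, perihelion longitudes, inclination, for roughly ad 1000) are approximate modern values supplied purely for illustration, and the way conditions (1)–(3) are read off is my own; the paper's polynomial formulae are not reproduced here.

```python
import numpy as np

# Minimal sketch of the geometric criterion described above: the geocentric
# eccentricities e1, e2 and apogee longitude of Venus obtained from the
# heliocentric orbits of the Earth and Venus.
deg = np.pi / 180.0

# Approximate heliocentric elements, ca. AD 1000 (illustrative assumptions only)
e_earth, lon_peri_earth = 0.0171, 85.7    # Earth: eccentricity, longitude of perihelion [deg]
e_venus, lon_peri_venus = 0.0073, 117.5   # Venus: eccentricity, longitude of perihelion [deg]
a_venus, incl_venus     = 0.7233, 3.39    # Venus: semi-major axis [AU], inclination [deg]

def unit(lon_deg):
    return np.array([np.cos(lon_deg * deg), np.sin(lon_deg * deg)])

# Condition 1: centres of the two orbits, measured from the Sun and projected on the ecliptic.
centre_earth = e_earth * unit(lon_peri_earth + 180.0)                    # towards Earth's aphelion
centre_venus = e_venus * a_venus * np.cos(incl_venus * deg) * unit(lon_peri_venus + 180.0)

# Vector from the observer to the centre of the eccentric deferent.
ecc_vec = centre_venus - centre_earth
e1 = np.linalg.norm(ecc_vec)                                             # first eccentricity
apogee_lon = np.degrees(np.arctan2(ecc_vec[1], ecc_vec[0])) % 360.0      # condition 2: apsidal line

# Condition 3: the Earth's empty focus, carried into the geocentric picture and
# projected onto the apsidal line, gives the equant; e2 is its distance from the centre.
empty_focus = centre_venus - 2.0 * centre_earth
e2 = np.dot(empty_focus, ecc_vec / e1) - e1

print(f"e1 = {60*e1:.2f}  (Ptolemaic norm R = 60)")
print(f"e2 = {60*e2:.2f}")
print(f"geocentric apogee of Venus ~ {apogee_lon:.1f} deg "
      f"(solar apogee ~ {lon_peri_earth:.1f} deg)")
# Typical output: e1 ~ 0.8, e2 ~ 1.0, apogee ~ 73-74 deg, i.e. roughly 12 deg
# behind the solar apogee -- of the same order as the band plotted in Figure 3.
```

The historical values listed in Tables 1 and 2 can be judged against figures of this kind.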
The medieval values are arranged in chronological order and, for a reason discussed below, in two separate groups. An important note in our discussion in the sequel is to consider the relation between the medieval astronomers’ values for the eccentricities of the Sun and Venus in Table 1. In doing so, the maximum values for the equations of centre of the Sun and Venus should be taken into account. For Yaḥyā, Ḥabash, al-Battānī, Ibn Yūnus, Ibn al-A‘lam, Ibn al-Fahhād, and Jamāl al-Dīn al-Zaydī, the equations of centre of the Sun and Venus are equal to each other, and thus, eVenus = e1 = e2 = ½eSun. Note that al-Battānī, Ibn al-A‘lam, and Jamāl al-Dīn give the values for the equation of centre of Venus with a precision up to arc-minutes, which means that they have the rounded values of the maximum equation centre of the Sun as the maximum equation of centre of Venus (1;59,10° ≈ 1;59°, 2;0,10° ≈ 2;0°, and 2;0,47° ≈ 2;1°, respectively). The values for the eccentricities of the Sun and of Venus in Eastern Islamic zījes. In the earliest phase of the rise of astronomy in the medieval Middle East, about the latter part of the eighth century, Indian astronomical hypotheses and systems were very influential. One of them was the Midnight System (Ārdharātrika) developed by Āryabhaṭa (b. ad 476), which has been substantially preserved in the Pañcasiddhāntikā of Varāhamihira (ad 505–587) and the Khaṇḍakhādyaka of Brahmagupta (ad 598–670). The early Islamic astronomers became familiar with it through pre-Islamic Persian astronomy, particularly the tradition of the Shāh zīj. A hypothesis of this system is the equality of the orbital elements of the Sun and Venus, in the sense that the apsidal lines of the Sun and Venus coincide with each other (in both of the works mentioned above, the apogees of the Sun and Venus share a common longitude of 80°), and their eccentricities are equal (2;20) (i.e. converted to the Ptolemaic models: eSun = 2eVenus = e1 + e2).10 With regard to our analysis set forth in the previous section, the emergence of such a hypothesis at some moment in medieval astronomy does not come as a surprise. Rather, it should have been quite probable that the poor and inaccurate observations of Venus could lead to the result that its geocentric orbital elements are equal to those of the Sun, because of the contiguity of the spatial directions of their orbits as appearing to an Earth-bound observer.11 Although some early Islamic astronomers, such as Ya‘qūb b. Ṭāriq, adopted this hypothesis of the Midnight System and its parameters via the Shāh zīj, some of his contemporaries, like al-Fazārī (d. 
ca.ad 796–806) and al-Khwārizmī (ca.ad 780–850), based their works upon other Indian traditions and so made use of different values for the orbital elements of the Sun and Venus.12 After the reception of the Almagest in Islamic astronomy in the ninth century, some astronomers kept Āryabhaṭa's hypothesis of the equality of the orbital elements of the Sun and Venus as something like a single theoretical element, for reasons unknown to us at present, and incorporated it into Ptolemy's planetary hypotheses/models.13 It is not the only instance of maintaining some elements of Indian astronomy and mixing them with Ptolemaic astronomy in the medieval Islamic period.14 As we have already seen in section "Ptolemy's model for Venus," the double eccentricity of Venus is also equal to the eccentricity of the Sun in the Almagest, and thus the only thing that indicates the adoption of Āryabhaṭa's hypothesis in the works of early Islamic astronomy is the placing of their apogees at the same longitude. According to Bīrūnī's account in his al-Qānūn al-mas'ūdī X.4:15 The information given in the first paragraph is surprising, because both the motion of the solar apogee and Āryabhaṭa's hypothesis (as indicated in Tables 1 and 2) can be found in the two extant manuscripts of the Mumtaḥan zīj, which was written prior to Ḥabash's zīj. What Bīrūnī says can be considered an aspect of the mysterious situation surrounding the available manuscripts of the Mumtaḥan zīj, concerning (1) their originality: both were copied after Ibn al-A‘lam's time (d. ad 985) and ultimately go back to a recension of the Mumtaḥan zīj, presumably compiled in the tenth century,16 and (2) the fact that it is not known precisely which parts of this work resulted from Yaḥyā's observations in the Shammāsiyya quarter of Baghdad and which ones are the achievements of other astronomers of the Mumtaḥan group, working in Damascus after Yaḥyā's death.17 From Bīrūnī's statements, it is clear that he did not find the motion of the solar apogee in a version of the Mumtaḥan zīj available to him, which was attributed to Yaḥyā; this is not implausible at all, since the discovery of the motion of the solar apogee does not appear to have taken place immediately after the measurement of a value of 82° (or 82;39° as found in the Mumtaḥan zīj) for its longitude in the first half of the ninth century, which is ~17° more than Ptolemy's value of 65.5°; we know that this topic was a matter of discussion until the turn of the eleventh century, and even Bīrūnī himself found it necessary to deal with it in depth.18 Also, his remarks give the strong impression that in that version of the Mumtaḥan zīj, Yaḥyā had converted Ptolemy's values for the longitudes of the planetary apogees to his epoch, since this is what Bīrūnī did (the conversion of Ptolemy's values to his epoch by an increment of about 13°; see Note 80).
The values for the longitudes of the planetary apogees in the Mumtaḥan zīj might have been dependent upon the Almagest in one way or another, although the differences between them amount to 11.5° in the case of Jupiter and Saturn, 11° for Mercury, and 9° for Mars.19 In the second paragraph, we are first told that Ḥabash was the first medieval Middle Eastern astronomer who applied Āryabhaṭa's hypothesis to the Ptolemaic model, as can be found in his zīj, which is closely dependent upon the available Mumtaḥan zīj (see Table 1);20 however, Bīrūnī does not explicitly refer to Āryabhaṭa, but to the Shāh zīj, which served as an intermediary for the transmission of Āryabhaṭa's hypothesis to early Islamic astronomy. The fully preserved contents of al-Battānī's zīj testify to Bīrūnī's remark that this hypothesis was also employed later in it (Tables 1 and 2). It was afterwards maintained in the Ḥākimī zīj of Ibn Yūnus, Bīrūnī's elder contemporary, but Bīrūnī was apparently not acquainted with this work.21 It is noteworthy that Ibn Yūnus not only accepted Āryabhaṭa's hypothesis through the Shāh zīj, but also adopted the value 4;2° for the maximum equation of centre of Mercury (corresponding to an eccentricity of about 3;55) from the same work.22 Of course, he deployed unprecedented, non-Ptolemaic values for the radii of the epicycles of the two inferior planets in order to compute his tables of their epicyclic equation, values which can be traced back neither to the Shāh zīj nor to any other Indian tradition, but which appear to have been measured by Ibn Yūnus himself.23 We have seen so far that Āryabhaṭa's hypothesis penetrated the majority of the influential, important works in the classical period of astronomy in the medieval Middle East, which lasted until the early eleventh century. In the late Islamic period (after ca. ad 1000), we are confronted with two streams in astronomy with regard to the relation between the orbital elements of the Sun and Venus: In the mainstream, the situation we encountered in the early Islamic period changed dramatically, in that Āryabhaṭa's hypothesis gradually lost its dominance. Another stream was dependent on the reproduction of the early Islamic astronomical tables, which meant that Āryabhaṭa's hypothesis did not disappear completely until the foundation of the Maragha Observatory (northwestern Iran, ca.ad 1260–1320). We first explain the latter, and then return to the mainstream. This bipartition is necessary in order to keep the discussion in chronological order. Al-Battānī's zīj appears to have been widely used in the Middle East until the early twelfth century, so that some zījes were written in the eleventh century, in which al-Battānī's radix and parameter values were simply reproduced. One of them is the now lost Fākhir zīj compiled by Abu'l-Ḥasan ‘Alī b.
Aḥmad al-Nasawī, a younger contemporary of Bīrūnī; this work was based on al-Battānī’s zīj, as can be inferred from the values for the longitudes of the solar and planetary apogees adopted in it, as come down to us via Kamālī’s comparative material presented in his Ashrafī zīj.24 Another example in this regard is Ṭabarī’s Mufrad zīj (ca.ad 1100),25 in which al-Battānī’s values for the longitudes of the solar and planetary apogees have been updated for the beginning of 431 Y (1 Ādhār 1373 Alexander/1 March 1062) by adding an increment of 2;45°, which is in agreement with the rate of precession of 1°/66y and the interval of time of about 182 years between al-Battānī’s and Ṭabarī’s epochs. In the latter part of the twelfth century, al-Fahhād remarks that the use of al-Battānī’s zīj had come to an end in his time. Al-Fahhād says that early in his career he had compiled four astronomical tables on the basis of al-Battānī’s parameter values, but that he later found them in error: because of the inconsistencies (tafāwut) in al-Battānī’s observation. It is certainly confirmed that al-Battānī’s observation is erroneous, because by the direct observations (bi-ra’y al-‘ayn, “as witnessed by eye”), we see that in the planetary conjunctions as well as in the magnitudes and timings of the solar and lunar eclipses there are sizeable differences (tafāwut) [between the observational data and those computed on the basis of al-Battānī’s work]. In the entire lands of Syria and Arabia, none of the practitioners of this art does rely on al-Battānī’s observation, except for a part of the people of ‘Irāq [including central Iran and Mesopotamia] who have not any other observation [at their disposal].26 The limited use of al-Battānī’s zīj in central Iran, which al-Fahhād refers to, and its implications for the adoption of the Indian hypothesis were continued, at most, until the turn of the fourteenth century. In Kamālī’s comparative list,27 we can find that the Indian hypothesis was utilized in the two thirteenth-century works entitled the Muntakhab zīj and the Razā’ī zīj, written, respectively, by Muntakhab al-Dīn and Abu al-Ḥasan, both from Yazd (central Iran) about the mid-thirteenth century. Both works are now lost, but a zīj in poems, the so-called Manẓūm zīj (Versified zīj), from Muntakhab al-Dīn is extant, in which the longitudes of the Sun and Venus are taken as equal to each other.28 It deserves noting that according to Kamālī, Ibn al-A‘lam’s values for the equations of centres of Jupiter and Saturn were employed in both works, which can be confirmed by the corrective equation tables pertinent to the Razā’ī zīj as preserved in the anonymous Sulṭānī zīj,29 but the longitudes of the apogees show no obvious relation to Ibn al-A‘lam’s values. In Ashrafī zīj III.1, Kamālī himself points out that until the time when he wrote his own work, it was usual in Shiraz (central Iran) to compute the ephemerides of the superior planets from the ‘Alā’ī zīj and those of the Sun, the Moon, and the inferior planets from al-Nasawī’s Fākhir zīj,30 which, as mentioned earlier, was based on al-Battānī’s zīj; but, from his own observations at the times of conjunctions, he found deviations in the case of Venus and, especially, Mercury, which led him to utilize the Shāhī zīj instead. 
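As a quick arithmetical check of the 2;45° increment mentioned above for the Mufrad zīj (a sketch using only the figures quoted in the text):

```python
# Check of the increment applied in Tabari's Mufrad zij: about 182 years at a
# precession rate of 1 degree per 66 years, expressed in degrees and minutes.
years, rate = 182, 1 / 66          # values quoted in the text
shift = years * rate               # in degrees
d = int(shift)
m = round((shift - d) * 60)
print(f"{shift:.3f} deg = {d};{m:02d} deg")   # -> 2.758 deg = 2;45 deg
```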
Returning to the mainstream of astronomy in the late Islamic period, it should be said that all of the outstanding late Islamic astronomers, whose works and achievements exerted a great influence on their later followers, unanimously returned to Ptolemy's derivation that the apogee of Venus is behind that of the Sun, regardless of the fact that they took the eccentricity of Venus as larger, smaller, or nearly equal to that of the Sun. These astronomers are discussed in what follows (see also Tables 1 and 2). The turning point in the relation between the orbital elements of the Sun and Venus seems in all likelihood to have come with Ibn al-A‘lam, in the sense that, unlike the majority of the early Islamic astronomers, he did not follow Āryabhaṭa's hypothesis, but returned to Ptolemy's Almagest in putting the double eccentricity of Venus equal to the eccentricity of the Sun and locating the apogee of Venus behind that of the Sun, deriving a good value for its longitude (with an absolute error of less than 2°; see Tables 1 and 2). One of al-Fahhād's noteworthy statements in the prologue of his ‘Alā'ī zīj (written ca.ad 1172) highlights Ibn al-A‘lam in this respect in contrast to the other early Islamic astronomers:31 We have observed Mars for a long period, which was in agreement with Ibn al-A‘lam's observation [i.e. the data al-Fahhād obtained from his observations were in agreement with the ephemeris computed on the basis of Ibn al-A‘lam's parameter values/computational tables]. Also, I observed many times Venus with the star Qalb al-asad [i.e. Regulus, α Leo], which was nicely in agreement [with Ibn al-A‘lam's observation], but [his values] were different in the longitude of the apogee and the epicyclic anomaly [of Venus] from other observations [i.e. the values for these two parameters measured in other observational programs and/or adopted in other zījes]. Ibn al-A‘lam was no doubt the first outstanding figure in the field of planetary astronomy in the Islamic period, and his now-lost ‘Aḍudī zīj exerted a great influence on later medieval Middle Eastern astronomers. He was apparently the earliest medieval astronomer who was seriously engaged in the derivation of the fundamental parameters of the Ptolemaic planetary models, and he measured new values for the eccentricities of Saturn (3;2), Jupiter (2;54),32 and Mercury (3;35).33 He also has an unprecedented value for the radius of the lunar epicycle.34 Although Ibn al-A‘lam's ‘Aḍudī zīj is now lost, its underlying parameter values can be found in later works, so that it can be reconstructed to a large extent (see Notes 65 and 79).35 Bīrūnī and al-Khāzinī measured new values for the solar eccentricity, both of which are smaller than Ptolemy's (Table 1). As shown elsewhere,36 Bīrūnī's figure is one of the excellent values measured in the medieval Middle East, whereas al-Khāzinī's is one of the imprecise values determined by the late Islamic astronomers. Both astronomers adopted Ptolemy's value for the eccentricity of Venus and thus took it to be greater than half that of the Sun. The values for the eccentricities of the other planets adopted in Bīrūnī's al-Qānūn and al-Khāzinī's Sanjarī zīj, the ultimate achievements of their long careers, are Ptolemaic.
Bīrūnī’s value for the longitude of the apogee of Venus, which has been updated from the Almagest (see Table 2), is egregiously about −6° in error, which is an inevitable consequence of the fact that his value, 1°/69y, for the apogeal motion is smaller than the true rate of the motion of the apogee of Venus, about 1°/53y, in addition to the existence of an error of about −2.5° in Ptolemy’s value. He was skilful in the measurement of the solar orbital elements; although he did not seriously deal with a systematic observational program for the purpose of renewing the measurement of the planetary orbital elements,37 he certainly knew about the substantial differences between the methods of the derivation of the orbital elements of the Sun and Venus; seemingly, for the same reason, he could not see any relation between the orbital elements of the Sun and Venus, as can be perceived from the second paragraph of the passage we have already quoted from him in the previous section. About a century later, Khāzinī took a substantial step further in the revival of planetary astronomy, resulting in a significant improvement in the determination of the longitudes of the apogees of Venus and Mars. Unlike the other three planets, for which he only updated Ptolemy’s values for the longitudes of the apogees in the Almagest, the values he utilized for the longitudes of the apogees of Venus and Mars give the strong impression that they might have been the results of new observations and of a checking of the ephemerides against empirical data; his value for the longitude of the apogee of Venus (see Table 2) is very precise (error ~ –0.6°). In his Kayfiyyat al-i‘tibār (How to experiment),38 which he conceived as an introduction to his zīj, Khāzinī deals with the principal features of observational astronomy and explains reasonable ways how to reconcile between available theories and observational data from a coherent methodological point of view. In a section titled “the beginning of the experimentation,” which is located between the end of the treatise in question and the beginning of his zīj,39 he speaks about his 35-year program of checking and correcting the astronomical tables in use in his time against observations, in which context he explicitly refers to the al-Ma’mūnī (i.e. Mumtaḥan) zīj and al-Battānī’s zīj.40 In the list of the major and serious flaws he encountered in them, he mentions for the case of Venus the existence of errors “in its latitude, due to the deviations in its apogee,” a worthwhile statement that provides us with a clue to investigate a probable reason for which the late Islamic astronomers put aside the Indian hypothesis as well as the astronomical tables using it, such as al-Battānī’s zīj. Al-Fahhād took the eccentricity of Venus as half the solar one and the longitude of the apogee of Venus about 12° behind that of the Sun (Tables 1 and 2). Analogous to Bīrūnī and al-Khāzinī, his values for the eccentricities of the other planets are borrowed from the Almagest. As reflected in the quote mentioned earlier, his departure from Āryabhaṭa’s hypothesis seems to have been occurred because of the agreement he found between the data obtained from his observations and Ibn al-A‘lam’s theory of Venus. At this point, a now lost Shāhī zīj written by a certain Ḥusām al-Dīn al-Sālār about the mid-thirteenth century deserves noting. According to Kamālī,41 the apogees of the Sun and Venus in it have a separation of ~10;45° in longitude. 
This work can be reconstructed on the basis of the rich information provided in Kamālī's Ashrafī zīj and the anonymous Sulṭānī zīj. Al-Ṭūsī and the main staff of the Maragha observatory, founded by Hülegü, the first ruler of the Mongolian Īlkhānīd dynasty of Iran (d. 1265), adopted Ibn Yūnus's value for the solar eccentricity, but they preferred to employ the Mumtaḥan and al-Battānī's value for that of Venus, which does not seem to be a matter of coincidence or confusion at all. Rather, it seems to be a reasonable choice, and they quite probably followed Ibn al-A‘lam on the point that the orbital elements of the Sun and Venus need by no means be interconnected. As regards the Maragha team's achievements concerning Venus, the rediscovery of the equality of its maximum inclination and slant is also worth noting (they are the two components of Ptolemy's latitude models of the inferior planets).42 Muḥyī al-Dīn al-Maghribī, the most prominent astronomer of the Maragha observatory in the field of observational astronomy (working independently of al-Ṭūsī's official team), maintained the eccentricity of Venus to be less than half the solar one. He carried out a systematic observational program in Maragha, which ran for more than a decade, from 1262 through 1274. His Talkhīṣ al-majisṭī (Compendium of the Almagest) contains a detailed account of his extensive observations and measurements of the Ptolemaic planetary orbital elements in Maragha.43 The only extant copy of this treatise is incomplete; according to the list of contents, the missing parts dealt with the inferior planets and the planetary latitudes. Nevertheless, we can be confident that he gave importance to the inferior planets, because he gives a highly accurate non-Ptolemaic value for the maximum inclination of Mercury in his last zīj, the Adwār al-anwār, written in Maragha.44 In both zījes written at the Maragha observatory, the double eccentricity of Venus is a bit less than the eccentricity of the Sun. Contemporary with the Maragha Observatory, Khubilai Khan, the first emperor of the Mongolian Yuan dynasty of China (d. ad 1294), founded an Islamic Astronomical Bureau in Beijing in ad 1271 and appointed a certain Zhamaluding as its first director, who was probably identical to an Iranian astronomer named Jamāl al-Dīn Muḥammad b. Ṭāhir b. Muḥammad al-Zaydī of Bukhārā. The observational activities in the Bureau led to a new set of values for the planetary parameters. Although the original work that was written on the basis of these parameter values seems lost, some of the parameter values are preserved in two later works: the first one, the Huihuili, is a Chinese translation of a Persian zīj from the Bureau, prepared in Nanjing in 1382–1383; the other, the Sanjufīnī zīj, was written in Arabic by a certain Sanjufīnī in Tibet in 1366.45 As can be seen in Tables 1 and 2, Jamāl al-Dīn takes the eccentricity of Venus equal to half that of the Sun and puts the apogee of Venus more than 12° behind that of the Sun. The Samarqand observatory was the last major centre of creative achievement by Islamic astronomers in the field of planetary astronomy, and there remarkably precise values were measured for the orbital elements of Venus (see below). About a century and a half later, Taqī al-Dīn Muḥammad b. Ma'rūf (ad 1526–1585) made a series of systematic observations at the short-lived observatory in Istanbul in the latter half of the 1570s.
All of his observations concern the Sun and the Moon,46 and both zījes he wrote about ad 1580 contain only the solar and lunar mean motions and equation tables; the Sidrat muntaha 'l-afkār fī malakūt al-falak al-dawwār (The Lotus Tree in the Seventh Heaven of Reflection; also called the Shāhanshāhiyya zīj) is based on the parameter values he measured at the Istanbul observatory, whereas the Kharīdat al-durar wa jarīdat al-fikar (The non-bored pearls and the arrangement of ideas) is based on Ulugh Beg's Sulṭānī zīj.47 Of course, his value, ε = 23;28,54°, for the obliquity of the ecliptic, which was measured from his two observations carried out in Istanbul in ad 1577, was applied to both works.48 The observatory was destroyed in the early 1580s49 before the observers had enough time to deal with planetary and stellar astronomy. In the early eighteenth century, Persian and Indian astronomers used and practiced a new astronomy which had come to them through the transmission to India of the Tabulae astronomicae Ludovici magni (ad 1702), compiled by the French astronomer Philip de La Hire (ad 1640–1718). All materials on the Sun, the Moon, the planets, and the calculation of eclipses in the Persian Muḥammadshāhī zīj, compiled by Mirzā Khayr-Allāh Muhandis (i.e. the "Geometer"), Shīrāzī (d. ad 1747), and Rāja Jai Singh Sawā'ī (ad 1688–1743)50 in Jaipur in the late ad 1730s under the patronage of the latter and dedicated to the Mughal emperor Muḥammad (b. 1702, reign ad 1719–1748), are based on de La Hire's work. One century later, Ghulām Ḥusayn Jaunpūrī (ad 1790/1791–1862) adhered to this revolutionary system and then established it as a new tradition in his Bahādurkhānī Encyclopedia (printed in ad 1835) and Bahādurkhānī zīj (written in ad 1838 and printed in ad 1855), dedicated to his patron, the Rāja of Tikārī. In these works, for example, the apogees (aphelions) and ascending nodes of the orbits of the planets no longer share the same motion; in the case of Venus, the apogee has a daily motion of 14iii 10iv and the node 7iii 35iv, which are in agreement with the values 23;56,50° and 12;47,50° that de La Hire gives for the motions of the apogee and the node of Venus in 1000 years. The planet is also given a maximum equation of centre of 0;50°.51 In the medieval Islamic period, good values (mostly between 1;2 and 1;4) were adopted for the eccentricity of Venus, which should be reckoned a fruit of the measurement of remarkably precise values for that of the Sun, because of the connection existing between them in ancient and medieval astronomy and the fact that the eccentricity of Venus is in reality close to that of the Sun (see section "Derivation of the geocentric orbital elements of Venus from the heliocentric parameters" and Figure 3). It is important to note that from the middle of the thirteenth century onwards, the Middle Eastern astronomers took the eccentricity of Venus to be less than half the solar one, as it is in reality. As can be seen in Table 1, this improvement appears to have occurred, for the first time, in the zījes of the Maragha tradition and was then followed by Ibn al-Shāṭir.
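The de La Hire rates quoted above can be checked numerically. In the sketch below, the daily motions given in sexagesimal thirds and fourths are accumulated over 1000 years (a year length of 365.25 days is my assumption); they reproduce the quoted 1000-year motions to within a couple of arcminutes, consistent with the daily rates having been rounded to whole fourths.

```python
# Daily motions of the apogee and node of Venus in sexagesimal thirds/fourths,
# converted to the corresponding motion over 1000 years (365.25-day years assumed).
def thirds_fourths_to_deg(thirds, fourths):
    return thirds / 60**3 + fourths / 60**4        # degrees per day

def to_sexagesimal(x):
    d = int(x); m = int((x - d) * 60); s = round(((x - d) * 60 - m) * 60)
    return f"{d};{m},{s:02d}"

for name, iii, iv, quoted in (("apogee", 14, 10, "23;56,50"),
                              ("node",    7, 35, "12;47,50")):
    motion = thirds_fourths_to_deg(iii, iv) * 365.25 * 1000
    print(f"{name}: {motion:8.3f} deg per 1000 yr = {to_sexagesimal(motion)} (quoted: {quoted})")
# The small residuals reflect the rounding of the daily rates to whole fourths.
```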
The improvement became more apparent and significant in the Sulṭānī zīj, the official product of the Samarqand observatory, where the most precise value for the eccentricity of Venus in all of ancient and medieval astronomy, 0;52, was measured;52 as displayed in Figure 3, this fascinating value lies near the graph of the average of the modern values for the eccentricities e1 and e2 for that time, exactly the result that a highly accurate and carefully conducted systematic program of observations and measurements would be expected to yield. However, there is no historical evidence to clarify how such an improvement was made or how it was interpreted, especially because it was unprecedented in both the Ptolemaic and the Indian traditions that had come down to Islamic astronomers. The same progressive improvement in attaining accurate values for the eccentricity of Venus can also be clearly seen in the case of the values for the longitude of the apogee of the planet. Table 2 shows that in early medieval Middle Eastern astronomy, the errors are inevitably egregious and nearly of the same order; Yaḥyā: +12.3°, al-Battānī: +11.0°, and Ibn Yūnus: +12.6°. In contrast, in the late Middle Eastern Islamic works, the errors are appreciably reduced, to less than 1°; in Khāzinī's Mu‘tabar zīj: –0.6°, al-Fahhād's ‘Alā'ī zīj: –0.8°, the Īlkhānī zīj: –0.2°, and Ulugh Beg's Sulṭānī zīj: +0.7°. For the other astronomers, the errors amount to a few degrees, but are not as large as those in the early Islamic period; Ibn al-A‘lam: –1.8°, al-Maghribī: +3.7°, Jamāl al-Dīn: –1.3°, and Ibn al-Shāṭir: –1.9°.53 When the Indian hypothesis of the equality of the orbital elements of the Sun and Venus reached the western Islamic realm, it became incorporated into a tradition that was based on a quite different system of astronomical thinking, demanding a kind of treatment of the relation between observational data and hypotheses different from that in Eastern Islamic astronomy. In this tradition, secular changes and variations in the basic parameters detected by observations (parameters which, in the Ptolemaic astronomy followed by the medieval Middle Eastern astronomers, were taken to be constant or thought to be unaltered over short periods of time) were given a higher epistemological status, to such a degree that they were incorporated into the fundamental models.54 A notable example of such treatment in Western Islamic astronomy is the invention of a solar model with variable eccentricity by Ibn al-Zarqālluh (d. ad 1100).55 This model is an ingenious attempt to account for the long-term continuous decrease in the solar eccentricity after Ptolemy's time, as known from the observations made by the Islamic astronomers from the Mumtaḥan group in the early ninth century to Ibn al-Zarqālluh's time. The mechanism embedded in this model is similar to that which Ptolemy invented for his models for the Moon and Mercury (see Figure 5). The centre D of the deferent revolves on a hypocycle with centre C, so that the eccentricity DT of the Sun changes from a maximum of TO0 to a minimum of TO. The parameter values of this mechanism, i.e., the maximum and minimum solar eccentricities and the motion of the centre of the eccentric on the circumference of the central hypocycle, are nearly the same in the various sources of Western Islamic astronomy: emax ≈ 2;29, emin ≈ 1;51 (thus, the radius DC of the hypocycle is approximately 0;19), and a complete revolution of the centre of the eccentric on the circumference of the hypocycle takes about 3345 years.
At the epoch, namely the beginning of the Hijra era, the centre of the eccentric was located at a distance of 83;40,31° from the apsidal line.56 The motional parameter values of the model were derived in such a manner that the maximum eccentricity of 2;29 was obtained for Hipparchus’s time, i.e., about the mid-second century bc. Figure 6 shows the graph of the solar eccentricity according to Ibn al-Zarqālluh’s solar model with the parameter values mentioned above (dash-dotted curve), along with the graphs of the eccentricities of the Sun and Venus on the basis of the modern theories, as already exhibited in Figure 3.

As regards the eccentricity (and, accordingly, the equation of centre) of the planet, two different treatments can be identified in the Western Islamic sources. First, some astronomers, like Ibn Isḥāq (ad 1193–1222), Ibn al-Raqqām (d. ad 1315), and Ibn al-Bannā’, adopted the Indian hypothesis of the equivalence of the orbital elements of the Sun and Venus along with Ibn al-Zarqālluh’s solar model. This inevitably led to the result that the values for the solar eccentricity computed on the basis of Ibn al-Zarqālluh’s model were taken over for the eccentricity of Venus as well. Second, some scholars, such as Ibn al-Kammād (fl. ca. ad 1116) and Ibn ‘Azzūz al-Qusanṭīnī (d. ad 1354), accepted the prevalent value 1;59° for the maximum equation of centre of Venus borrowed from the Mumtaḥan tradition, as established in the zījes of Ḥabash and al-Battānī.58 Hence, they held that the eccentricity of Venus is greater than that of the Sun. What is most notable is that the values which the first group of Western Islamic astronomers derived for the eccentricity of Venus from Ibn al-Zarqālluh’s solar theory range between 0;56 and 0;59, which are very close to the true values for the eccentricity of Venus in the period in question (lying within the tolerance band of the geocentric eccentricity of Venus; see Figure 6). Nevertheless, the high accuracy of these values, comparable with the remarkably precise value measured at the Samarqand observatory, should not come as a surprise, for it is evidently a matter of pure coincidence: their accidental accuracy is merely the result of combining a solar model with variable eccentricity with the Indian hypothesis of the equality of the orbital elements of the Sun and Venus.

We have seen in section “Derivation of the geocentric orbital elements of Venus from the heliocentric parameters” that the geocentric eccentricities of Venus depend substantially on that of the Sun/Earth and are a little smaller than it. Also, the geocentric apsidal line of Venus is very close to that of the Sun/Earth. It is thus conceivable that imprecise observations would have led to the conclusion that the apsidal lines of the Sun and Venus coincide with each other and/or that their eccentricities are equal. Ptolemy derived a value for the double eccentricity of Venus that is equal to his value for the solar eccentricity, but he took no notice of the relation between the two. In his Midnight System, developed about the early sixth century, Āryabhaṭa took not only the eccentricities of the Sun and Venus but also the longitudes of their apogees to be equal to each other. As a consequence, medieval astronomers from the early Islamic period onwards were exposed to the idea of a connection between the orbital elements of the Sun and Venus, which had come down to them from both the Ptolemaic and the Indian traditions.
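To see numerically how, as noted above, the combination of Ibn al-Zarqālluh’s model with the Indian hypothesis yields Venus eccentricities of about 0;56–0;59 for the twelfth to fourteenth centuries, the following sketch runs the mechanism forward from the Hijra-era position of 83;40,31° quoted above and halves the resulting solar eccentricity. The uniform circulation of D on the hypocycle (360° in 3345 years) and the rough Julian-year book-keeping are assumptions of this reconstruction, not statements found in the sources.

```python
from math import sqrt, cos, radians

def sexa(*parts):
    return sum(p / 60**i for i, p in enumerate(parts))

TC = (sexa(2, 29) + sexa(1, 51)) / 2     # observer-to-hypocycle-centre distance
DC = (sexa(2, 29) - sexa(1, 51)) / 2     # hypocycle radius, about 0;19
PERIOD_YEARS = 3345                      # one revolution of D on the hypocycle
THETA_HIJRA = sexa(83, 40, 31)           # position at the beginning of the Hijra era (ad 622)

def solar_ecc(year_ad):
    """Zarqalluh-model solar eccentricity, assuming uniform motion of D on the hypocycle."""
    theta = THETA_HIJRA + 360 * (year_ad - 622) / PERIOD_YEARS
    return sqrt(TC**2 + DC**2 + 2 * TC * DC * cos(radians(theta)))

for year in (1100, 1200, 1300):
    half = solar_ecc(year) / 2           # value taken over for Venus under the Indian hypothesis
    minutes = int(half * 60 + 0.5)
    print(year, f"0;{minutes}")          # comes out around 0;59, 0;57, 0;56
```

Under these assumptions the halved solar eccentricity indeed falls in the 0;56–0;59 band for the period of Ibn Isḥāq, Ibn al-Raqqām, and Ibn al-Bannā’.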
In the tradition-based medieval system of thinking, such similarities between different traditions were not treated with indifference or dismissed as mere coincidence. Consequently, it does not come as a surprise that Āryabhaṭa’s hypothesis had a great influence on the main trends of medieval astronomy and was prevalent in quite different traditions. It passed into the Western Islamic regions basically through the transmission of Middle Eastern astronomical tables, such as the Mumtaḥan zīj and al-Battānī’s Ṣābi’ zīj, and from there diffused into medieval Latin and Jewish astronomy, apparently via the Alfonsine Tables.59 It maintained its dominance to such an extent that it can be found in a good number of European treatises until just before the emergence of Kepler’s new astronomy (most notably, in Copernicus’s Commentariolus).60 In Eastern Islamic astronomy, however, it began to be rendered obsolete after the tenth century.

For the eccentricities of the Sun and Venus, three treatments can be identified in medieval Islamic astronomy:

1. The eccentricity of Venus equal to half that of the Sun.

1.1 In early Eastern Islamic zījes (ca. ad 800–1000) and in some Western Islamic zījes (after ad 1000) following Āryabhaṭa’s hypothesis in the Midnight System, the eccentricity of Venus is half that of the Sun. In the latter group, Ibn al-Zarqālluh’s solar model was utilized, according to which the eccentricity of the Sun changes periodically, decreasing in the period from the mid-second century bc to about ad 1500. As exhibited in Figure 6, the values that this model gives for half the solar eccentricity during the period from ca. ad 1000 to ad 1950 lie within the tolerance band of the geocentric eccentricity of Venus. Consequently, the adoption of Āryabhaṭa’s hypothesis along with Ibn al-Zarqālluh’s solar model accidentally yielded accurate values for the eccentricity of Venus.

1.2 Some outstanding figures constituting the main stream of planetary astronomy in the late Eastern Islamic period (after ca. ad 1000), like Ibn al-A‘lam, al-Fahhād and Jamāl al-Dīn, returned to Ptolemy’s derivation, i.e., they took the double eccentricity of Venus equal to the solar eccentricity, as in Āryabhaṭa’s Midnight System, but placed the apogee of Venus behind that of the Sun.

2. No relation between the eccentricities of the Sun and Venus.

Some late Eastern Islamic astronomers, such as Bīrūnī and al-Khāzinī, apparently did not see any relation between the eccentricities of the Sun and Venus. They adopted the values they had measured for the solar eccentricity, which are less than Ptolemy’s, but held to Ptolemy’s value for that of Venus. This situation is analogous to that encountered in the other Western Islamic zījes, different from group (1), where the eccentricity of the Sun is computed according to Ibn al-Zarqālluh’s solar model, which gives smaller values for it than those adopted in the early medieval Middle Eastern zījes such as the Mumtaḥan zīj, Ḥabash’s zīj, and al-Battānī’s Ṣābi’ zīj, but that of Venus is the same, i.e., about 1;2, as adopted in those works. Thus, for these astronomers the eccentricity of Venus is inevitably larger than half the solar one; this has no astronomical connotation, but is solely a consequence of the adoption of fundamental parameter values from different sources/traditions.
3. The eccentricity of Venus smaller than half that of the Sun.

In the late medieval Middle Eastern astronomical tables from the middle of the thirteenth century onwards (notably, the zījes of the Maragha tradition, Ibn al-Shāṭir’s Jadīd zīj, and Ulugh Beg’s Sulṭānī zīj), the eccentricity of Venus was taken to be smaller than half that of the Sun. This achievement is significant for the astronomical reasons mentioned earlier and may be considered one of the discoveries of late Islamic astronomy, encountered neither in Ptolemaic nor in Indian astronomy; an exceptional advance in the derivation of the eccentricity of Venus took place at the Samarqand observatory in the first part of the fifteenth century, where the accurate value 0;52 was measured for the eccentricity of Venus and deployed in Ulugh Beg’s Sulṭānī zīj.

Two hypotheses on the spatial orientation of the geocentric orbits of the Sun and Venus can be found in medieval Islamic astronomy:

1. The two orbits coincide with each other, in accordance with the Indian tradition of the Midnight System; this is inaccurate, and was dominant in early Eastern Islamic astronomy as well as in Western Islamic astronomy.

2. The apogee of Venus is behind that of the Sun, in agreement with Ptolemy’s tradition; this is correct, and was held in late Eastern Islamic astronomy.

In early Islamic Middle Eastern astronomy, as well as in Western Islamic astronomy, the errors in the values for the longitude of the apogee of Venus are larger than +10°, a consequence of putting the longitude of the apogee of Venus equal to that of the Sun. But as this hypothesis was discarded in late medieval Middle Eastern astronomy, the values for the longitude of the apogee of the planet improved significantly, so that the errors were reduced to less than 1° in the Īlkhānī zīj and Ulugh Beg’s Sulṭānī zīj, the two official works connected with the Maragha and Samarqand observatories.

The author owes a debt of gratitude to Benno van Dalen (Germany), Julio Samsó (Spain), John Steele (United States), and an anonymous referee for their critical remarks and suggestions. He also wishes to thank Dirk Grupe (Germany) for revising the English of an earlier version of this paper. This work was financially supported by the Research Institute for Astronomy and Astrophysics of Maragha (RIAAM) under research project No. 1/5750–6.

Notes on contributor

S. Mohammad Mozaffari is an assistant professor of History of Astronomy at the Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), Iran. He has published several papers on observational and mathematical astronomy in the medieval Middle East since 2012. He is currently working on Ibn Yūnus’s and Ibn al-Shāṭir’s non-Ptolemaic star tables.

Notes

1.See O. Pedersen, A Survey of the Almagest (Odense: Odense University Press, 1974; with annotation and new commentary by A. Jones, New York: Springer, 2010), pp. 295–309; O. Neugebauer, A History of Ancient Mathematical Astronomy (3 vols., Berlin; Heidelberg; New York: Springer, 1975), vol. 1, pp. 152–8; C. Wilson, “The Inner Planets and the Keplerian Revolution,” Centaurus, 17, 1973, pp. 205–48; N.M. Swerdlow, “Ptolemy’s Theory of the Inferior Planets,” Journal for the History of Astronomy, 20, 1989, pp. 29–60.
2.In Ptolemy’s iterative method for determining the orbital elements of the superior planets, the total eccentricity (e1 + e2) is first computed and then the result is halved under the assumption that e1 = e2; but, in his method for the derivation of the orbital elements of the inferior planets, either eccentricity is computed independently from the other. It is noteworthy that Abū al-Rayḥān al-Bīrūnī (ad 973–1048), in his al-Qānūn al-mas‘ūdī VI.8 and X.3.1 (Abū al-Rayḥān al-Bīrūnī, al-Qānūn al-mas‘ūdī (Mas‘ūdīc canons) (3 vols., Hyderabad: Osmania Bureau, 1954–1956), vol. 2, pp. 681–5, vol. 3, pp. 1183–4), proposed an alternative method for the derivation of the orbital elements of the Sun and the superior planets, which resembles Ptolemy’s corresponding method for the inferior planets; see S.M. Mozaffari, “Bīrūnī’s Four-Point Method for Determining the Eccentricity and the Direction of the Apsidal Lines of the Superior Planets,” Journal for the History of Astronomy, 44, 2013, pp. 207–11. 3.G.J. Toomer, Ptolemy’s Almagest (Princeton: Princeton University Press, 1998), pp. 469–74. 4.Toomer, op. cit. (Note 3), p. 155. 5.The late Prof. O. Neugebauer (op. cit. (Note 1), vol. 1, pp. 147, 213) derives the values 0;40 for the eccentricity and 47° for the longitude of the apogee of Venus for ad 100. The reason behind these erroneous figures has been explained in S.M. Mozaffari, “Holding or Breaking with Ptolemy’s Generalization: Considerations about the Motion of the Planetary Apsidal Lines in Medieval Islamic Astronomy,” Science in Context, 30, 2017, pp. 1–32, p. 5, Note 3. 6.See Wilson, op. cit. (Note 1); N.M. Swerdlow and O. Neugebauer, Mathematical Astronomy in Copernicus’s De Revolutionibus (2 vols., New York: Springer, 1984), vol. 1, esp. pp. 369–71. 7.The medieval Islamic values for the solar parameters are investigated in detail earlier in S.M. Mozaffari, “Limitations of Methods: The Accuracy of the Values Measured for the Earth’s/Sun’s Orbital Elements in the Middle East, A.D. 800 and 1500,” Journal for the History of Astronomy, 44, 2013, Part 1: issue 3, pp. 313–36, Part 2: issue 4, pp. 389–411 and S.M. Mozaffari, “An Analysis of Medieval Solar Theories,” Archive for History of Exact Sciences, 72, 2018, pp. 191–243. 8.The true values for the geocentric eccentricity and longitude of the apogee of Venus in this study were computed on the basis of the formulae for the heliocentric orbital elements of the Earth and Venus in J.L. Simon, P. Bretagnon, J. Chapront, M. Chapront-Touze, G. Francou, and J. Laskar, “Numerical Expressions for Precession Formulae and Mean Elements for the Moon and the Planets,” Astronomy and Astrophysics, 282, 1994, pp. 663–83. 9.As we will show elsewhere, the criterion is not consistent with the Ptolemaic conception of the apsidal line of the inferior planets in the case of Mercury, because of the great eccentricity of this planet in comparison with that of the Earth, which leads to a complex situation regarding the variation of the apparent size of its orbit as seen from our planet. 10.Pañcasiddhāntikā III.2–3, IX.7–8, XVI.12–14: O. Neugebauer and D. Pingree, The Pañcasiddhāntikā of Varāhamihira, Historisk-filosofiske Skrifter (2 vols., Copenhagen: Det Kongelige Danske Videnskabernes Selskab, 1970–1971), vol. 1, pp. 39, 93, 149–51, vol. 2, pp. 24, 69–70, 101–2; Khaṇḍakhādyaka I.13 and II.6: Brahmagupta, The Khaṇḍakhādyaka of Brahmagupta, B. Chatterjee (ed. and En. trans.) (New Delhi: Bina Chatterjee, 1970), pp. 49–50, 54, 283; E.S. 
Kennedy, “The Sasanian Astronomical Handbook Zīj-i Shāh and the Astrological Doctrine of “Transit” (Mamarr),” Journal of the American Oriental Society, 78, 1958, pp. 246–62 (reprinted in E.S. Kennedy, Studies in the Islamic Exact Sciences (Beirut: American University of Beirut, 1983), pp. 319–35), pp. 256–7; E.S. Kennedy and D. Pingree (eds), The Book of the Reasons behind Astronomical Tables (New York: Scholars’ Facsimiles & Reprints, 1981), p. 220; D. Pingree, “The Persian “Observation” of the Solar Apogee in ca. A.D. 450,” Journal of Near Eastern Studies, 24, 1965, pp. 334–6; D. Pingree, “The Fragments of the Works of Ya‘qūb Ibn Ṭāriq,” Journal of Near Eastern Studies, 27, 1968, pp. 97–125; D. Pingree, “The Fragments of the Works of Al-Fazārī,” Journal of Near Eastern Studies, 29, 1970, pp. 103–23; D. Pingree, Jyotiḥśāstra; Astral and Mathematical Literature (Wiesbaden: Harrassowitz, 1981), pp. 15–6; B.L. van der Waerden, “The Heliocentric System in Greek, Persian and Hindu Astronomy,” in G. Saliba and D.A. King (eds), From Deferent to Equant: A Volume of Studies on the History of Science of the Ancient and Medieval Near East in Honor of E. S. Kennedy (Annals of the New York Academy of Sciences, vol. 500) (New York: New York Academy of Sciences, 1987), pp. 525–45, esp. pp. 530–2; B.L. van der Waerden, “The Astronomical System of the Persian Tables II,” Centaurus, 30, 1987, pp. 197–211. It deserves noting, however, that the underlying values for the planetary parameters in the Shāh zīj is not completely in agreement with those deployed in the Midnight System; see Pingree, “The Fragments of the Works of Ya‘qūb Ibn Ṭāriq” (Note 10), pp. 104–5. 11.This hypothesis of Āryabhaṭa in his own Midnight System distinguishes from a curious feature in other Indian astronomical traditions (and in another work by himself, Āryabhaṭīya I.7: Āryabhaṭa, Āryabhaṭīya, W.E. Clark (Eng. trans.) (Chicago: University of Chicago Press, 1930), pp. 16–8), according to which the progressive motion of the apsidal lines of the Sun, the Moon, and the planets and the retrograde motion of the nodal lines of the Moon and the planets do not take place at a constant rate, but each has its own motion (different from Ptolemaic astronomy, where the apsidal and nodal lines are sidereally fixed and therefore tropically subject to the precessional motion). However, the angular velocities given for their motions in various Indian traditions are too small to be assumed to have been obtained from actual systematic observations. The early Muslim astronomers were completely acquainted with it. See Súrya Siddhánta I.41–44: P. Gangooly (ed.) and E. Burgess (En. trans.), The Súrya Siddhánta: A Textbook of Hindu Astronomy (Delhi: Motilal Banarsidass, 1997), p. 7; P.B. Deva Sastri and L. Wilkinson (Eng. trans.), The Súrya Siddhánta, Or An Ancient System of Hindu Astronomy, followed by the Siddhánta Śiromani (Amsterdam: Philo Press, 1974), pp. 26–8; Kennedy and Pingree, op. cit. (Note 10), pp. 118–9, 282; Pingree, “The Fragments of the Works of Ya‘qūb Ibn Ṭāriq” (Note 10), p. 99; Pingree, “The Fragments of the Works of Al-Fazārī” (Note 10), p. 109. 12.Al-Fazārī has the unprecedented values qmax = 2;11,15° for the Sun (which is close, but by no means identical, to the ones utilized in some Indian sources; e.g., the value 2;10,31° in the Paitāmahasiddhānta) and 2;15° for Venus (approximately equal to the value 2;14° employed in the Midnight System); see Pingree, “The Fragments of the Works of Ya‘qūb Ibn Ṭāriq” (Note 10), pp. 
103–4; Pingree, “The Fragments of the Works of Al-Fazārī” (Note 10), pp. 112–3. For al-Khwārizmī, see O. Neugebauer, The Astronomical Tables of Al-Khwārizmī (Copenhagen: Munksgaard, 1962), p. 41, 99. 13.We have checked the two extant Arabic translations of the Almagest by Ḥajjāj b. Yūsuf b. Maṭar in ad 827–828 (LE: Leiden, Or. 680: ff. 150v–152v, dropped from MS. LO: Library of London, Add 7474, copied in 686 H/ad 1287) and by Ḥunayn b. Isḥāq in ad 880–890, which was afterward revised by Thābit b. Qurra (d. ad 901) (S: Iran, Tehran, Sipahsālār Library, no. 594, copied in 480 H/ad 1087–1088, ff. 134v–136v, PN: USA, Rare Book and Manuscript Library of University of Pennsylvania, LJS 268, written in an Arabic Maghribī/Andalusian script at Spain in 783 H/ad 1381, ff. 99r–v) and found nothing indicating that Āryabhaṭa’s Midnight System had any deleterious effect on the contents related to the orbital elements of Venus in these translations of the Almagest. Of course, there was an earlier translation made shortly before or around ad 800, which is not available nowadays. See P. Kunitzsch, “Translators’ Errors in the Almagest, Arabic and Latin,” in P. Arfé, I. Caiazzo and A. Sannino (eds), Adorare caelestia, gubernare terrena: Atti del colloquio internazionale in onore di Paolo Lucentini (Napoli: Brepols, 2011), pp. 283–93, 284; R. Lorch, “Greek-Arabic-Latin: The Transmission of Mathematical Texts in the Middle Ages,” Science in Context, 14, 2001, pp. 313–31, 315–6, and the references mentioned therein. 14.In some other cases, the plausible reasons for maintaining the Indian astronomical elements can be thought: They either are the subjects which Ptolemy does not deal with in the Almagest (e.g. the colours and the optical limitation of the visibility of the eclipses) or could be of practical use (e.g. the Indian hypotheses of the angular diameters of the Sun and the Moon, which made the annular solar eclipses justifiable and even predictable); see S.M. Mozaffari, “Historical Annular Solar Eclipses,” Journal of the British Astronomical Association, 123, 2013, pp. 33–6; S.M. Mozaffari, “Wābkanawī’s Prediction and Calculations of the Annular Solar Eclipse of 30 January 1283,” Historia Mathematica, 40, 2013, pp. 235–61; S.M. Mozaffari, “A Case Study of How Natural Phenomena were Justified in Medieval Science: The Situation of Annular Eclipses in Medieval Astronomy,” Science in Context, 27, 2014, pp. 33–47; S.M. Mozaffari, “Annular Eclipses and the Considerations about the Solar and Lunar Angular Diameters in the Medieval Astronomy,” in W. Orchiston, D.A. Green and R. Strom (eds), New Insights From Recent Studies in Historical Astronomy: Following in the Footsteps of F. Richard Stephenson (New York: Springer, 2015), pp. 119–42, esp. p. 138. 15.Bīrūnī, op. cit. (Note 2), vol. 3, pp. 1197–8. The English translation of the second paragraph of this passage is from B.R. Goldstein and F.W. Sawyer, “Remarks on Ptolemy’s Equant Model in Islamic Astronomy,” in Y. Maeyama and W.G. Salzer (eds), Prismata: Festschrift für Willy Hartner (Wiesbaden: Franz Steiner Verlag, 1977), pp. 165–81, a part of which is repeated in J. Chabás and B.R. Goldstein, “Ibn al-Kammād’s Muqtabis zij and the Astronomical Tradition of Indian Origin in the Iberian Peninsula,” Archive for History of Exact Sciences, 69, 2015, pp. 577–650, 610, with some changes. 16.See B. van Dalen, “A Second Manuscript of the Mumtaḥan Zīj,” Suhayl, 4, 2004, pp. 9–44, esp. p. 11. 
17.This issue is beyond the scope of this paper and is dealt with in depth elsewhere; only as an example, it is noteworthy that the Mumtaḥan zīj has a very precise solar theory with the errors not exceeding 5′ for the period of 13,000 days since ad 1–1–820 (see Mozaffari, “An Analysis,” esp. pp. 221–3, 235). It employs Yaḥyā’s values for the orbital elements of the Sun. Nevertheless, his measured values for the times of the equinoxes of ad 829–830, a short while before his death, suffer from large errors of up to ~ +7 hours and are not compatible with the Mumtaḥan solar theory. In contrast, the time of the autumnal equinox of ad 832 measured by Sanad b. ‘Alī and Khālid b. ‘Abd al-Malik al-Marwarūdhī in Damascus is only ~ +1/2 hour off (Mozaffari, “An Analysis,” p. 216) and is in complete agreement with the Mumtaḥan solar theory. Therefore, it seems that the finalized solar theory in the Mumtaḥan zīj as it has come down to us is a modified version of an earlier theory prepared by Yaḥyā. 18.See Mozaffari, “Limitations,” Part 2, pp. 403–8; Mozaffari, op. cit. (Note 5), pp. 8–9. 19.See Mozaffari, op. cit. (Note 5), pp. 14–5. 20.In the Istanbul copy of Ḥabash’s Zīj (I: Istanbul, Süleymaniye, Yeni Cami, no. 784, ff. 89r, 115r; M.-T. Debarnot, “The Zīj of Ḥabash al-Ḥāsib: A Survey of MS Istanbul Yeni Cami 784/2,” in Saliba and King, op. cit. (Note 10), pp. 35–69, 44), the longitudes of the apogees of the Sun and Venus are equal to 82;39°, as in the Mumtaḥan zīj. In the Berlin copy of this work, there are also two tables for the longitudes of the apogees of the Sun and the five planets. One gives the values to five sexagesimal fractional places, wherein the fractions from the seconds to the fifths are equal (…;…,24,2,43,53°). The longitudes of the apogees of the Sun and of Venus are equal to 79;30, … (B: Berlin, Ahlwardt 5750 (formerly Wetzstein I 90), f. 28r). The tabular values are about 3;9° less than the apogee longitudes in the Mumtaḥan/Ḥabash’s zīj, and so they are for the beginning of the Hijra era. The other gives the longitudes of the planetary apogees up to the ninth sexagesimal fractional place for the year 872 Hijra, whose beginning was on 1 August 1467. The longitudes of the apogees of the Sun and Venus are equal up to the arc-seconds: 92;24 …° (B: f. 17v). At the beginning of this table, we are explicitly told that it was updated from the Mumtaḥan zīj. With Yaḥyā’s value 82;39° for ca. 830 in Table 2 and the precessional motion of 1° in 66 years, as associated with the Mumtaḥan tradition, we derive a longitude of about 92;19° for the given date. 21.Muḥyī al-Dīn al-Maghribī appears to have been responsible for introducing Ibn Yūnus’s zīj at the Maragha observatory, since, to the best of our knowledge, no trace of it may be found in the Eastern Islamic lands until that time; more notably, a reference to it can be found neither in ‘Abd al-Raḥmān al-Khāzinī’s On Experimental Astronomy (Kayfiyyat al-i‘tibār) II.4 (in: al-Khāzinī, al-Zīj al-mu‘tabar al-sanjarī, V: Vatican, Biblioteca Apostolica Vaticana, Arabo 761, f. 8r), wherein the two most influential Middle Eastern works, the Mumtaḥan zīj and al-Battānī’s Ṣābi’ zīj, are mentioned, nor in Ibn al-Fahhād’s very informative evaluation of the deficiencies and errors in his Islamic predecessors’ works, as put forward in the prologue of his ‘Alā’ī zīj (Farīd al-Dīn Abu al-Ḥasan ‘Alī b. ‘Abd al-Karīm al-Fahhād al-Shirwānī or al-Bākū’ī, Zīj al-‘Alā’ī, MS. India, Salar Jung, no. H17, pp. 3–5). 22.Already noted in D.A.
King, “Aspects of Fatimid Astronomy: From Hard-Core Mathematical Astronomy to Architectural Orientations in Cairo,” in M. Barrucand (ed.), L’Égypte Fatimide: son art et son histoire – Actes du colloqie organisé à Paris les 28, 29 et 30 mai 1998 (Paris: Presses de l’Université de Paris-Sorbonne, 1999), pp. 497–517, 502; see ‘Alī b. ‘Abd al-Raḥmān b. Aḥmad Ibn Yūnus, Zīj al-kabīr al-Ḥākimī, L: Leiden, Universiteitsbibliotheek, Or. 143, pp. 121, 191–3; J.-J.-A. Caussin de Perceval, “Le livre de la grande table hakémite, Observée par le Sheikh, …, ebn Iounis,” Notices et Extraits des Manuscrits de la Bibliothèque nationale, 7, 1804, pp. 16–240, 221; Pingree, “The Fragments of the Works of Ya‘qūb Ibn Ṭāriq” (Note 10), p. 104; Pingree, “The Fragments of the Works of Al-Fazārī” (Note 10), p. 113. 23.For Venus, the maximum epicyclic equation of 46;25°, which corresponds to a radius of the epicycle r ≈ 43;28, and for Mercury: 22;24°, corresponding to r ≈ 22;52. See Ibn Yūnus, Zīj (Note 22), L: pp. 121, 190, 192; Caussin, op. cit. (Note 22), p. 221. 24.Muḥammad b. Abī ‘Abd-Allāh Sanjar al-Kamālī (Sayf-i munajjim), Ashrafī zīj (written in Shiraz in the early fourteenth century), MSS. F: Paris, Bibliothèque Nationale, no. 1488, f. 232v, G: Iran, Qum, Gulpāyigānī, no. 64731, f. 249r. 25.Abū Ja‘far Muḥammad b. Ayyūb al-Ḥāsib al-Ṭabarī, Mufrad zīj (The unique zīj), MS. Cambridge, Browne Collection College, O.1, f. 175v. 26.Al-Fahhād, Zīj (Note 21), p. 3. The most notable of such errors took place in the case of the conjunction between Jupiter and Saturn in December 1166. In the prediction of the time of this conjunction, al-Battānī’s zīj was about 35 days in error. Al-Fahhād computed it to have occurred on 10 December, at 8;14,35 hours before noon (pp. 4, 57–9). In reality, the conjunction took place on 11 December, at 23:46 MLT, hence the error in Ibn al-Fahhād’s time is less than 2 days. 27.Kamālī, Zīj (Note 24), F: f. 232v, G: f. 249r. 28.Muntakhab al-Dīn al-Yazdī, Manẓūm zīj, MS. Iran, Mashhad University, Theology Faculty, no. 674, ff. 46v–47r: He gives the value 87;55° for the longitudes of the apogees of the Sun and Venus for the beginning of 621 Yazdigird (13 January 1252), which is more than −2° in error, if taken as the place of the solar apogee, but too large (with an error of about +9;41°) for being the location of the apogee of Venus for the given time. The values Kamālī reports for the longitudes of the solar and planetary apogees from the Muntakhab zīj as converted to 13 Khurdād 672 (13 March 1303) are all by 0;46° more than the values in the Manẓūm zīj, which is in accordance with the rate of precession of 1°/66y and the period of 51 years between them, and which shows that both zījes written by Muntakhab al-Dīn share the same epoch and radix values. The Muntakhab zīj and Razā’ī zīj can be reconstructed to a large extent on the basis of the information that has come down to us through the Ashrafī zīj X.8 and X.9: (Note 24), F: f. 230v and ff. 231v–233r, 234r, 235v, G: f. 247v and ff. 248v–249r, 250v, and the anonymous Sulṭānī zīj written in Yazd about the 1290s (NB this is neither be confused with Wābkanawī’s Zīj al-Muḥaqqaq al-Sulṭānī, nor with Ulugh Beg’s Sulṭānī zīj), which is preserved in a unique manuscript in Iran, Library of Parliament, no. 184. Despite the late E.S. Kennedy’s conjecture (E.S. Kennedy, “A Survey of Islamic Astronomical Tables,” Transactions of the American Philosophical Society, New Series, 46, 1956, pp. 123–177, no. 25 on p. 
129), this work is not identical with the Shāhī zīj, since some material of the latter work is explicitly quoted and explained in it; e.g., the tables of the equation of time on ff. 7v and 15r, and the method of Ḥusām al-Dīn al-Sālār for the construction of the planetary equation tables on f. 77r. Some tables of the Razā’ī zīj are preserved in this Sulṭānī zīj: (a) the table of the longitude of the lunar node on f. 11r, (b) the procedure for the computation of the longitude of the superior planets in III.6 on f. 79r, (c) the planetary mean positions in longitude and in anomaly on f. 81v (the longitudes of the apogees of the Sun and Venus are equal), and (d) the tables of the “difference in equations” for the superior planets on ff. 120v–121v (see, also, next note). 29.In the Sulṭānī zīj (Note 28), all the principal tables for the equation of centre of the superior planets are displaced, but based on Ptolemy’s eccentricity values, and in the steps of 0;5°, as described in the following: Saturn: Min = 0;28°, Max = 13;32° (ff. 16v–22r); Jupiter: Min = 0;45°, Max = 11;15° (ff. 30v–36r); and Mars: Min = 0;35°, Max = 23;25° (ff. 44v–50r). The equation tables from other zījes appear in the form of the auxiliary tables called “difference in equation” (ikhtilāf-i ta‘dīl), in each of which the differences in entries between a principal equation table of this zīj and the corresponding table from another one have been tabulated. Consequently, the three corrective tables in the Sulṭānī zīj for the equation of centre of the superior planets according to the Razā’ī zīj actually display the differences between Ptolemy’s equation values and those originally tabulated in the Razā’ī zīj. The corrective table for Saturn is subtractive and displaced with Max = 1;44° and Min = 0;16° (f. 120v); thus, the maximum difference in Saturn’s equation of centre between the Razā’ī zīj and Almagest is Δqmax = –0;44°; therefore, according to the Razā’ī zīj, the maximum value of equation of centre of Saturn is qmax = 6;31 – 0;44 = 5;47°. The corrective table for Jupiter is symmetrical with Δqmax = ±0;17° (f. 121r); therefore, qmax = 5;15 + 0;17 = 5;32°. For Mars, the corrective table is additive and displaced with Min = 1;38° and Max = 2;28°; thus, Δqmax = +0;25° (f. 121v); therefore, qmax = 11;25° + 0;25° = 11;50°. Note that the maximum values for the equation of centre of Jupiter and Saturn are equal to Ibn al-A‘lam’s (see below, Note 32), in agreement with Kamālī’s statement. But, the source of the value 11;50° for Mars’ maximum equation of centre (corresponding to e ≈ 6;13) is unknown. However, the table of Mars’ equation of centre from the Razā’ī zīj as preserved in the Ashrafī zīj is on the basis of Ptolemy’s eccentricity value (although displaced, with Min = 2;35° and Max = 25;25°) (Kamālī, Zīj (Note 24), F: f. 235v, G: f. 250v). A close value e = 6;15 is mentioned in Ashrafī zīj III.9.2: (Note 24), F: f. 51r, G: f. 56r, where Kamālī lists the planetary eccentricities. The astronomers in the Samarqand observatory measured the other close value, e ≈ 6;13,30, less than two centuries later; see S.M. Mozaffari, “Planetary Latitudes in Medieval Islamic Astronomy: An Analysis of the Non-Ptolemaic Latitude Parameter Values in the Maragha and Samarqand Astronomical Traditions,” Archive for History of Exact Sciences, 70, 2016, pp. 513–41, 535. 30.Kamālī, Zīj (Note 24), F: f. 47r, G: ff. 50v–51r. 31.Al-Fahhād, Zīj (Note 21), p. 4. See, also, B. 
van Dalen, “The Zīj-i Naṣirī by Maḥmūd ibn Umar: The Earliest Indian Zij and Its Relation to the ‘Alā’ī Zīj,” in C. Burnett et al. (eds), Studies in the History of the Exact Sciences in Honour of David Pingree (Leiden: Brill, 2004), pp. 825–62, 836. 32.Ibn al-A‘lam’s tables of the equation of centre of these two superior planets are preserved in Kamālī’s Ashrafī zīj. The table for Saturn’s equation of centre (Kamālī, Zīj (Note 24), F: f. 234v, G: f. 250r) is displaced with a minimum tabular value of 0;12° (for arguments 76°–81°) and a maximum value of 11;48° (for arguments 253°–258°). The table for Jupiter’s equation of centre (F: f. 235r, G: f. 250r) is also displaced with minimum 0;28° (for arguments 72°–78°) and maximum 11;32° (for arguments 246°–252°; on “displaced” equation tables, a term coined by the late Prof. E.S. Kennedy, see van Dalen, op. cit. (Note 31); J. Chabás and B.R. Goldstein, “Displaced Tables in Latin: The Tables for the Seven Planets for 1340,” Archive for History of Exact Sciences, 67, 2013, pp. 1–42, reprinted in J. Chabás and B.R. Goldstein, Essays on Medieval Computational Astronomy (Leiden: Brill, 2015), pp. 99–149; and the references mentioned therein). Accordingly, the maximum equations of centre of Saturn and Jupiter are derived, respectively, as 5;48° and 5;32°. The modern values for the geocentric eccentricity of the two planets in Ibn al-A‘lam’s time are, respectively, equal to 3;26 and 2;48 (see S.M. Mozaffari, “Ptolemaic Eccentricity of the Superior Planets in the Medieval Islamic Period,” in: G. Katsiampoura (ed.), Scientific Cosmopolitanism and Local Cultures: Religions, Ideologies, Societies; Proceedings of 5th International Conference of the European Society for the History of Science (Athens, 1–3 November 2012) (Athens: National Hellenic Research Foundation, 2014), pp. 23–30, 26). It should be noted that none of his values for the eccentricities of the two superior planets is more accurate than Ptolemy’s. That no new table for the equation of centre of Mars is associated with Ibn al-A‘lam gives the impression that he probably had not measured a new value for its eccentricity. Note that the geocentric eccentricity of Mars has remained nearly constant, about Ptolemy’s value 6;0, during the past two millennia, which may explain why Ibn al-A‘lam did not come up with a new value for it (see Mozaffari, op. cit. (Note 32), Figure 5 on p. 29). Ibn al-A‘lam’s value for the eccentricity of Saturn was used in the zījes of three Western Islamic astronomers; see J. Samsó and E. Millás, “The Computation of Planetary Longitudes in the zīj of Ibn al-Bannā,” Arabic Science and Philosophy, 8, 1998, pp. 259–86, reprinted in J. Samsó, Astronomy and Astrology in al-Andalus and the Maghrib (Variorum Collected Studies Series) (Aldershot; Burlington: Ashgate, 2007), Trace VIII, p. 273. 33.Ibn al-A‘lam’s table of the equation of centre of Mercury is preserved in Kamālī’s Ashrafī Zīj ((Note 24), F: f. 237r, G: f. 252v): the maximum equation of centre in this table is 3;40° (for arguments 99°–101°). It should be noted that his value for the eccentricity of this planet is more exact than Ptolemy’s three values 3;0, 2;45, 2;30, as found, respectively, in the Almagest, Planetary Hypotheses, and Canobic Inscription (Almagest IX.8,9: Toomer, op. cit. (Note 3), p. 459; B.R. Goldstein, “The Arabic Version of Ptolemy’s Planetary Hypotheses,” Transactions of the American Philosophical Society, 57, 1967, pp. 3–55, 19; A. 
Jones, “Ptolemy’s Canobic Inscription and Heliodorus’ Observation Reports,” SCIAMVS, 6, 2005, pp. 53–97, 69, 86–7); the true value during the past two millennia has been about 3;50 (note that for the eccentricity of Mercury, we consider here half of the distance between the Earth and the centre of the hypocycle in Ptolemy’s complicated model for this planet, on the circumference of which the centre of its deferent revolves). 34.See S.M. Mozaffari, “Muḥyī al-Dīn al-Maghribī’s Lunar Measurements at the Maragha Observatory,” Archive for History of Exact Sciences, 68, 2014, pp. 67–120, 105. 35.See E.S. Kennedy, “The Astronomical Tables of Ibn al-A‘lam,” Journal for the History of Arabic Science, 1, 1977, pp. 13–23; R.P. Mercier, “The Parameters of the Zīj of Ibn al-A‘lam,” Archives Internationales d’Histoire des Sciences, 39, 1989, pp. 2–50. 36.See Mozaffari, “Limitations,” Part 2, esp. pp. 395–97; Mozaffari, “An Analysis,” esp. p. 212. 37.During the period at which Bīrūnī was busy with writing al-Qānūn, he was 57 years, at least. As he states, until that time he had not yet observed desirably or investigated in depth the fixed star other than an observation of the star Spica (α Vir) on 2 July 1009, which he employed for the derivation of the precessional motion (see Mozaffari, “Limitations,” Part 2, p. 405). Despite a good number of solar and lunar observations that he made by himself or reported from his Muslim predecessors, he mentions nothing about new planetary observations. His procedure of correcting and converting Ptolemy’s planetary epoch mean and apogee longitudes to his epoch and base meridian (see al-Qānūn X.4: Bīrūnī, op. cit. (Note 2), vol. 3, pp. 1193–8) is a good example of the crude, artificial ways that a typical medieval astronomer could invent. It also reflects how difficult it could be for a single-handed astronomer (no matter whatever skilful or motivated) to cope with the determination of all fundamental parameters during his lifetime. 38.The term al-i‘tibār, as al-Khāzinī (Kayfiyyat al-i‘tibār, in: Zīj (Note 21), V: f. 4r) defines, connotes the “experiment” in the modern scientific method: “We called it [i.e., our method] ‘the experiment method’ (ṭarīq al-i‘tibār). […] In the experiment, observed facts (musallamāt marṣūda) are taken, and what are wanted (maṭlūbāt) are based on them.” 39.Khāzinī, Kayfiyyat al-i‘tibār, in: Zīj (Note 21), V: ff. 16v–17r. 40.Khāzinī, Kayfiyyat al-i‘tibār II.4, in: Zīj (Note 21), V: f. 8r. 41.Kamālī, Zīj (Note 24), F: f. 47r, G: ff. 50v–51r. 42.See Mozaffari, op. cit. (Note 29), pp. 520–2, 531–5. 43.See G. Saliba, “An Observational Notebook of a Thirteenth-Century Astronomer,” Isis, 74, 1983, pp. 388–401; “Solar Observations at Maragha Observatory,” Journal for the History of Astronomy, 16, 1985, pp. 113–22; “The Determination of New Planetary Parameters at the Maragha Observatory,” Centaurus, 29, 1986, pp. 249–71 (these three papers are reprinted in G. Saliba, A History of Arabic Astronomy: Planetary Theories During the Golden Age of Islam (New York: New York University, 1994), pp. 163–76, 177–86, 208–30); Mozaffari, op. cit. (Note 34). 44.See Mozaffari, op. cit. (Note 29), pp. 520–22, 530–31. Also, he has a non-Ptolemaic value for the inclination of Venus in his earlier zīj, the Tāj al-azyāj (Crown of the zījes), written in Damascus; see Mozaffari, op. cit. (Note 29), pp. 521, 531–5. 45.See K. Yabuuti, “The Influence of Islamic Astronomy in China,” in Saliba and King, op. cit. (Note 10), pp. 
547–59; “Islamic Astronomy in China during the Yuan and Ming Dynasties” trans. and partially revised by Benno van Dalen, Historia Scientiarum, 7, 1997, pp. 11–43; B. van Dalen, “Islamic and Chinese Astronomy under the Mongols: A Little-Known Case of Transmission,” in Y. Dold-Samplonius, J.W. Dauben, M. Folkerts and B. van Dalen (eds), From China to Paris: 2000 Years Transmission of Mathematical Ideas (Stuttgart: Franz Steiner, 2002), pp. 327–56; B. van Dalen, “Islamic Astronomical Tables in China: The Sources for the Huihui li,” in S.M.R. Ansari (ed.), History of Oriental Astronomy; Proceedings of the Joint Discussion-17 at the 23rd General Assembly of the International Astronomical Union, organised by the Commission 41 (History of Astronomy), held in Kyoto, August 25–26, 1997 (Dordrecht: Kluwer, Springer, 2002), pp. 19–30. On the accuracy of the values Jamāl al-Dīn measured for the eccentricities of Saturn and Jupiter, see Mozaffari, op. cit. (Note 32), esp. p. 27. 46.See S.M. Mozaffari and J.M. Steele, “Solar and Lunar Observations at Istanbul in the 1570s,” Archive for History of Exact Sciences, 69, 2015, pp. 343–62; Mozaffari, op. cit. (Note 5), p. 10. 47.E.g., the tables for the solar and lunar equation of centre in Kharīdat (B: Berlin, Staatsbibliothek zu Berlin, no. Ahlwardt 5699 = WE. 193, ff. 28r–v, 34r–v, C1: Cairo, Dār al-Kutub, Ṭal‘at Mīqāt Collection, no. 900, ff. 50v–51r, 58r–v, C2: Cairo, Dar al-Kutub, Ṭal‘at Mīqāt Collection, no. 76, ff. 37v–38r, 43v–44r, E: Istanbul, Süleymaniye, Esad Efendi Collection, no. 1976, ff. 4r–v, 6v–7r, K: Kandilli Observatory, no. 183, ff. 48v–49r, 56r–v) are always additive, but not displaced (like those in Ulugh Beg’s zīj); the first has Max = 3.863° and Min = 0° and the latter, Max = 26.519° and Min = 0°; note that the tabular numerical values in this work are in decimals. The maximum values for the solar and lunar equation of centre are thus, respectively, equal to 1;55,53° and 13;15,34°, which are the same values adopted in Ulugh Beg’s zīj (see Table 1; Mozaffari, “Limitations,” Part 1, p. 326; Mozaffari, op. cit. (Note 34), p. 105). 48.Taqī al-Dīn, Sidrat, K: Istanbul, Kandilli Observatory, no. 208/1 (up to f. 48v; autograph), f. 17v, N: Istanbul, Süleymaniye Library, Nuruosmaniye Collection, no. 2930, f. 23r, V: Istanbul, Süleymaniye Library, Veliyüddin Collection, no. 2308/2 (from f. 10v), f. 25r; Kharīdat (Note 47), C1: f. 8v, C2: f. 6r, E: f. 25v, K: f. 6v. 49.See A. Sayılı, The Observatory in Islam (Ankara: Türk Tarih Kurumu Basimevi, 1988), pp. 290–2. 50.On this work, see B. van Dalen, “Origin of the Mean Motion Tables of Jai Singh,” Indian Journal of History of Science, 35, 2000, pp. 41–66; D. Pingree, “An Astronomer’s Progress,” Proceedings of the American Philosophical Society, 143, 1999, pp. 73–85; D. Pingree, “Philippe de La Hire at the Court of Jayasiṃha,” in Ansari, op. cit. (Note 45), pp. 123–31; D. Pingree, “Philippe de La Hire’s Planetary Theories in Sanskrit,” in Dold-Samplonius et al. (Note 45), pp. 429–53; S.M.R. Ansari, “Survey of Zījes Written in the Subcontinent,” Indian Journal of History of Science, 50, 2015, pp. 575–601; and the references mentioned therein. 51.Khayr-Allāh Shīrāzī and Sawā’ī Jai Singh, Muḥammadshāhī zīj, P1: Iran, Parliament Library, no. 2144, pp. 196, 201, P2: Iran, Parliament Library, no. 6121, pp. 264, 269, P3: Iran, Parliament Library, no. 15780, pp. 232–3 (in blank), L: London, British Library, no. Add 14373, ff. 173v–174r; P. 
de La Hire, Tabulae astronomicae Ludovici magni jussu et munificentia exaratae et in lucem editae, 2nd ed. (Paris: Montalant, 1727), section of tables, pp. 64, 66–7. Ghulām Ḥusayn Jaunpūrī, in his Bahādurkhānī Encyclopedia (Jāmi’-i Bahādurkhānī (Calcutta, 1835), p. 616), lists the daily motions of the planetary apogees and nodes and ascribes the discovery of the difference between them to the Muḥammadshāhī’s observations (!). It should be noted that, compared to the famous astronomical tables in the seventeenth century, Philippe de La Hire’s values, at least for the motions of the apogee and the node of Venus, are not accurate; e.g., E. Halley (Astronomical Tables with Precepts both in English and Latin (London: Printed for William Innys, 1752), section of tables, p. Uu) gives their motions in 1000 years as 15;42,13° (≈ 56.5″/y) and 8;36,40° (≈ 31.0″/y), respectively; the latter value was confirmed by T. Bugge, “Astronomical Observations on the Planets Venus and Mars, Made with a View to Determine the Heliocentric Longitude of Their Nodes, the Annual Motion of the Nodes, and the Greatest Inclination of Their Orbits,” Philosophical Transactions of the Royal Society of London, 80, 1790, pp. 21–31, 26; the true values at the time are ~ 50.7″/y and ~ 32.4″/y. 52.As we have shown elsewhere, this work is a treasure of non-Ptolemaic values for the structural parameters of the motions of the planets in longitude and in latitude (see Mozaffari, op. cit. (Note 29), pp. 535–6), which has not received the attention it deserves. For example, no reference to these values can be found in E.S. Kennedy’s brief statement about the planetary equations and latitudes in Ulugh Beg’s zīj in Kennedy, op. cit. (Note 28), p. 167, nor, e.g., in E.S. Kennedy, Astronomy and Astrology in the Medieval Islamic World (Aldershot: Ashgate-Variorum, 1998), Trace XI, where he enumerates the heritage of Ulugh Beg. 53.The error in al-Kāshī’s value is about −1°, but as mentioned in the apparatus to Table 2, his value has been updated from that adopted in the Īlkhānī zīj. 54.The Middle Eastern Islamic astronomers apparently did not deal with the problem of long-term changes in the fundamental parameters until about the last quarter of the thirteenth century, when Quṭb al-Dīn al-Shīrāzī (ad 1236–1311) constructed a solar model in order to account for the continuous decrease observed in the obliquity of the ecliptic and in the solar eccentricity since Ptolemy’s time; see S.M. Mozaffari, “A Forgotten Solar Model,” Archive for History of Exact Sciences, 70, 2016, pp. 267–91. 55.On the model and its later receptions and parameters, see G.J. Toomer, “The Solar Theory of az-Zarqāl: A History of Errors,” Centaurus, 14, 1969, pp. 306–36; J. Samsó and E. Millás, “Ibn al-Bannā’, Ibn Ishāq and Ibn al-Zarqālluh’s Solar Theory,” appeared in 1989, in J. Samsó (ed.), Islamic Astronomy and Medieval Spain (Ashgate: Variorum, 1994), Trace X; J. Samsó, “Al-Zarqal, Alfonso X and Peter of Aragon on the Solar Equation,” in Saliba and King, op. cit. (Note 10), pp. 467–76; G.J. Toomer, “The Solar Theory of az-Zarqāl: An Epilogue,” in: Saliba and King, op. cit. (Note 10), pp. 513–9; E. Calvo, “Astronomical Theories Related to the Sun in Ibn al-Hā’im’s al-Zīj al-Kāmil fī ’l-Ta‘ālīm,” Zeitschrift für Geschichte der Arabisch-Islamischen Wissenschaften, 12, 1998, pp. 51–111. 56.See Samsó and Millás, op. cit. (Note 55), pp. 18, 21, 25–6; Calvo, op. cit. (Note 55), pp. 58–9. 57.M. Boutelle, “The Almanac of Azarquiel,” Centaurus, 12, 1967, pp.
12–9 (reprinted in Kennedy, Studies (Note 10), pp. 502–10), p. 13. 58.See E.S. Kennedy and D.A. King, “Indian Astronomy in Fourteenth Century Fez: The Versified Zīj of al-Qusunṭīnī,” Journal for the History of Arabic Science, 6, 1982, pp. 3–45, reprinted in D.A. King, Islamic Mathematical Astronomy (London: Variorum, 1986), Trace VIII, pp. 10–1; J. Samsó, “Andalusian Astronomy in 14th Century Fez: al-Zīj al-Muwāfiq of Ibn ‘Azzūz al-Qusanṭīnī” Zeitschrift für Geschichte der Arabisch-Islamischen Wissenschaften, 11, 1997, pp. 73–110, reprinted in Samsó, Astronomy and Astrology (Note 32), Trace IX, pp. 83, 102; J. Samsó, “Ibn al-Raqqām’s al-Zīj al-Mustawfī in MS Rabat National Library 2461,” in N. Sidoli and G. van Brummelen (eds), From Alexandria, through Baghdad (Heidelberg; New York; Dordrecht; London: Springer, 2014), pp. 297–325, 315, 317–18; Samsó and Millás, op. cit. (Note 32), pp. 265–66, 272–73; J. Chabás and B.R. Goldstein, “Andalusian Astronomy: al-Zīj al-Muqtabis of Ibn al-Kammād,” Archive for History of Exact Sciences, 48, 1994, pp. 1– 41, reprinted in Chabás and Goldstein, Essays on (Note 32), 179–226, pp. 5, 33; Chabás and Goldstein, op. cit. (Note 15), pp. 598–600, 605–06, 609–11. 59.See J. Chabás and B.R. Goldstein, The Alfonsine Tables of Toledo (Dordrecht: Kluwer Academic Publishers, 2003), pp. 153–55, 159–60, 253–54. 60.See E. Rosen, Three Copernican Treatises, 3rd ed. (New York: Octagon Books, 1971), p. 81; N.M. Swerdlow, “A Summary of the Derivation of the Parameters in Commentariolus from the Alfonsine Tables,” Centaurus, 21, 1977, pp. 201–13, 205. The following literature represents only a few examples from the Latin and Jewish astronomical corpus brought into light and investigated in recent years, in which the idea of the equality of the orbital elements of the Sun and Venus were adopted: B.R. Goldstein, The Astronomy of Levi Ben Gerson (1288-1344), A Critical Edition of Chapters 1-20 with Translation and Commentary (New York: Springer, 1985), p. 113; B.R. Goldstein and J. Chabás, “An Occultation of Venus Observed by Abraham Zacut in 1476,” Journal for the History of Astronomy, 30, 1999, pp. 187–200, 188; B.R. Goldstein, “An Anonymous Zij in Hebrew for 1400 A.D.: A Preliminary Report,” Archive for History of Exact Sciences, 57, 2003, pp. 151–71, 160–61; J. Chabás, “Astronomy for the Court in the Early Sixteenth Century, Alfonso de Córdoba and his Tabule Astronomice Elisabeth Regine,” Archive for History of Exact Sciences, 58, 2004, pp. 183–217, 188; J. Chabás and B.R. Goldstein, The Astronomical Tables of Giovanni Bianchini (Leiden: Brill, 2009), p. 34. 61.Yaḥyā b. Abī Manṣūr, Zīj al-mumtaḥan, E: Madrid, Library of Escorial, árabe 927, published in The Verified Astronomical Tables for the Caliph al-Ma’mūn, F. Sezgin (ed.) with an introduction by E.S. Kennedy (Frankfurt am Main: Institut für Geschichte der Arabisch-Islamischen Wissenschaften, 1986), ff. 14v, 41r–v, L: Leipzig, Universitätsbibliothek, Vollers 821, ff. 67v, 92v–93r; Ibn Yūnus, Zīj (Note 22), L: p. 121; Caussin, op. cit. (Note 22), p. 221; Kennedy and Pingree, op. cit. (Note 10), p. 226. On the Mumtaḥan zīj, also, see S.M. Mozaffari and G. Zotti, “Bīrūnī’s Telescopic-Shape Instrument for Observing the Lunar Crescent,” Suhayl, 14, 2015, pp. 167–88; S.M. Mozaffari, “A Revision of the Star Tables in the Mumtaḥan zīj,” Suhayl, 15, 2016–2017, pp. 67–100; and the references mentioned therein. 62.Ḥabash, Zīj (Note 20), I: ff. 90v, 117r–v, B: ff. 30v, 55r–v; Debarnot, op. cit. (Note 20), pp. 41–2, 44. 63.C.A. 
Nallino (ed.), Al-Battani sive Albatenii Opus Astronomicum (Publicazioni del Reale osservatorio di Brera in Milano, n. XL, pte. I–III, Milan: Mediolani Insubrum, 1899–1907. The Reprint of Nallino’s edition: Frankfurt: Minerva, 1969), vol. 2, p. 128. 64.Ibn Yūnus, Zīj (Note 22), L: pp. 121, 188–90; Caussin, op. cit. (Note 22), p. 221. 65.Ibn al-A‘lam’s table of the solar equation of centre has been preserved in the Ashrafī zīj (Note 24), F: f. 236v, G: f. 251v, but his equation tables of Venus are not extant. However, Kamālī says, in his Ashrafī zīj VIII.5: (Note 24), F: f. 230r, G: f. 247r, that the difference between al-Battānī’s and Ibn al-A‘lam’s values for the maximum equation of centre of Venus is only one arc-minute. 66.Bīrūnī, op. cit. (Note 2), vol. 3, p. 1258. 67.The value 2;4,39 is the average of the two values Bīrūnī measured for the solar eccentricity from his own observations carried out in ad 1016–1017. His table of the solar equation of centre in al-Qānūn is asymmetric with Max = 3;59,3,21° for a mean eccentric anomaly of 266° and Min = 0;0,56,39° for argument 90° (Bīrūnī, op. cit. (Note 2), vol. 2, pp. 710, 716), and therefore qmax = 1;59,3,21°, which strictly corresponds to e = 2;4,39. 68.The equation tables in Khāzinī’s zīj are symmetric. The table of the equation of centre of the Sun has the maximum value 2;12,23° for argument 92°, and that of Venus, 2;23° for arguments 85°–94° (al-Khāzinī, Zīj (Note 21), V: ff. 131v, 179r; L: London, British Library, Or. 6669, ff. 113v, 146r; Wajīz [Compendium of] al-Zīj al-mu‘tabar al-sanjarī, S: Tehran: Sipahsālār, no. 682, pp. 58–9, 84–9). These tables are preserved in Kamālī’s Ashrafī zīj (Note 24), F: ff. 238r, 239v, G: ff. 250v, 252r; the first is unchanged but has a scribal error, giving a value of 2;12,25 instead of 2;12,20 for argument 93°; the second is displaced with Max = 5;23° for arguments 216°–225° and Min = 0;37° for arguments 35°–44°. 69.Al-Fahhād, Zīj (Note 21), pp. 154–5. 70.Naṣīr al-Dīn al-Ṭūsī, Īlkhānī zīj, C: University of California, Caro Minasian Collection, no. 1462, p. 124; T: Iran, University of Tehran, Central Library, Ḥikmat collection, no. 165, ff. 71v–73r; P: Iran, Parliament Library, no. 181, f. 42v; M: Iran, Mashhad, Holy Shrine Library, no. 5332a, f. 75v. The table for the equation of centre of Venus is displaced with Min = 0;1° for arguments 83°–94° and Max = 3;59° for arguments 261°–271°. Note that, as shown elsewhere (see Mozaffari, op. cit. (Note 34), pp. 110–2), all of the solar and lunar parameters in the Īlkhānī zīj were adopted or updated from Ibn Yūnus’s Ḥākimī zīj. 71.Muḥyī al-Dīn al-Maghribī, Adwār al-anwār, M: Iran, Mashhad, Holy Shrine Library, no. 332, ff. 87v–88r, CB: Ireland, Dublin, Chester Beatty, no. 3665, ff. 85v–86r; also preserved in Kamālī’s Ashrafī zīj (Note 24), F: ff. 248v–249v, G: ff. 257r–v and in Shams al-Dīn Muḥammad al-Wābkanawī’s Zīj al-muhaqqaq al-sulṭānī ‘alā uṣūl al-raṣad al-Īlkhānī (The verified royal zīj on the basis of the parameters of the Īlkhānid observations), T: Turkey, Aya Sophia Library, no. 2694, f. 160v. Wābkanawī (Muḥaqqaq zīj IV.15.10: Y: Iran, Yazd, Library of ‘Ulūmī, no. 546, its microfilm is available in Tehran University Central Library, no. 2546, ff. 160v–161r; T: ff. 93r–93v; P: Iran, Parliament Library, no. 6435, f. 141r) reports e = 1;2,49 from al-Maghribī. 72.See Yabuuti, “Islamic Astronomy in China” (Note 45), pp. 22–4, 33. The table of the equation of centre of Venus in Sanjufīnī’s Zīj, MS. Paris: Bibliothèque Nationale, Arabe 6040, f.
50v; a table for the solar equation of centre cannot be found in this work, and the reason is as follows. In the tables on ff. 32v–34r, there are presented the true longitude of the Sun together with the lunar mean positions from 764 H to 895 H. This is a sort of the user-friendly medieval astronomical tables that dispense practitioners with the addition-subtraction procedure in the solar equation table as well as with taking into account the longitudes of its apogee, as noted in the pertinent explanatory section in I.2.2 (ff. 7r–v). In the tables on ff. 44v–46r, the mean positions of the Sun and planets are tabulated for the same period. A simple assessment of the correlated entries in both tables clearly affirms the use of a value of a bit more than 2;6 for the solar eccentricity. 73.‘Alā’ al-Dīn Abu ’l-Ḥasan ‘Alī b. Ibrāhīm b. Muḥammad al-Muṭa’’im al-Anṣārī, Ibn al-Shāṭir, al-Zīj al-Jadīd, K: Istanbul, Kandilli Observatory, no. 238, ff. 52v, 66v, L1: Leiden, Universiteitsbibliotheek, Or. 65, ff. 66r, 85r, L2: Leiden, Universiteitsbibliotheek, Or. 530, ff. 52r, 65r, O: Oxford, Bodleian Library, Seld. A inf 30, ff. 31v, 50v. 74.Jamshīd Ghiyāth al-Dīn al-Kāshī, Khāqānī zīj, IO: London: India Office, no. 430, f. 136v; P: Iran: Parliament Library, no. 6198, p. 121. 75.Ulugh Beg, Sulṭānī Zīj, P1: Iran, Parliament Library, no. 72, f. 144r, 117v–123r, P2: Iran, Parliament Library, no. 6027, ff. 134v–140r, 161v. The table for the equation of centre of Venus is displaced with Min = 0;20,41° for argument 89° and Max = 3;39,19° for argument 267°, thus qmax = 1;39,19°. 76.Yaḥyā, Zīj (Note 61), E: ff. 15r, 40r, 86v, L: ff. 60v–61r, 67r–68r; Ibn Yūnus, Zīj (Note 22), L: p. 121; Caussin, op. cit. (Note 22), p. 221. See, also, van Dalen, op. cit. (Note 16), p. 23. 77.Nallino, op. cit. (Note 63), vol. 2, p. 126. 78.Ibn Yūnus, Zīj (Note 22), L: p. 121; Caussin, op. cit. (Note 22), p. 221. 79.In his Ashrafī zīj (Note 24), F: f. 232v, G: f. 249r, Kamālī gives Ibn al-A‘lam’s values for the longitudes of the solar and planetary apogees as updated for 13 Adhar 1614 Alexander/23 Rajab 702/13 Khurdād 672 (13 March 1303) (in MS. G, the Alexandrian date is wrongly given as 14 Adhar 1612). They end with 19″, except for Mars, giving the impression that Ibn al-A‘lam’s original values were given with a precision up to arc-minutes, and the 19″ results from Kamālī’s precessional increment. The longitudes of the apogee of the Sun and Venus are, respectively, 89;5,19° and 75;55,19°. The epoch of Ibn al-A‘lam’s zīj is unknown. As set forth elsewhere (Mozaffari, op. cit. (Note 61)), it seems quite probable that (1) the second star table found in the preserved manuscripts of the Mumtaḥan zīj is a work by Ibn al-A‘lam himself, in the sense that he updated the longitudes in the first, and in all likelihood original, star table in this zīj (for the year 198 Y/ad 829–830) for the year 380 Y (ad 1011–1012) by adding an increment of 2;36°, which is in accordance with his rate of precession of 1°/70y and the interval of time of 182 between them. And (2) he attained this annual processional motion by a comparison between the value 135;6° he measured for the longitude of Regulus (α Leo) from his observation(s) carried out in 365 H (344–345 Y/ad 975–976) and the value 133;0° registered in the first Mumtaḥan star table. We convert the values for the longitudes of the apogees of Sun and Venus to the latter date, which is about 10 years before Ibn al-A‘lam passed away, by subtracting from them the value 4;40,19° (≈ (672–345)/70). 
80.Bīrūnī, op. cit. (Note 2), vol. 2, p. 693, vol. 3, pp. 1193–8. He simply converts Ptolemy’s values for the longitudes of the planetary apogees to his epoch by an increment of about 13° calculated from his rate of precession of 1°/69y (see Mozaffari, op. cit. (Note 5), pp. 13–4). 81.Khāzinī, Zīj (refs. 21 and 68), V: ff. 129r, 163v; L: ff. 102v, 125v; S: pp. 53–4. In a table in MS. V, the longitudinal differences between the apogee of the Sun and those of the five planets are given to arc-minutes for the beginning of the Hijra era, which, added to the longitude of the solar apogee, are generally in agreement with the values given in the main table of the radixes of the Sun, Moon, and planets in this work. Kamālī, Zīj (Note 24), F: f. 232v, G: f. 249r, has added 10;18,48° to Khāzinī’s values in order to update them for 23 Rajab 702 (13 March 1303). This increment is in accordance with the precessional motion of 1°/66y and the period of about 681 Persian years elapsed from the beginning of the Hijra era to the date in question. Khāzinī has added 7;35° to the longitudes of the apogee of Saturn, Jupiter, and Mercury in the Almagest, which approximately agrees with his rate of precession of 1°/66y and the interval of time of about 487 years, from the mid-130s ad to 622 ad, but for Mars and Venus, his values are by 12° and 12;35° greater than Ptolemy’s. We have added an increment of 7;33° to Khāzinī’s values in order to convert them to 1 January 1120 ad, a date falling within the period of his fruitful career. 82.Al-Fahhād, Zīj (Note 21), p. 73; see, also, Mozaffari, op. cit. (Note 5), pp. 17–8. 83.Īlkhānī Zīj (Note 70), C: pp. 56, 120, P: ff. 20v, 41r, M: ff. 33v, 73v. 84.Al-Maghribī, Adwār (Note 71), CB: f. 80v, M: f. 82v. As al-Maghribī explains in detail in his Talkhīṣ al-majisṭī IV.5–6 (MS. Leiden, Universiteitsbibliotheek, Or. 110), he measures the solar parameters from his four solar observations in ad 1264–1265, and then computes back from his figure 88;50,43° for the longitude of the solar apogee for 16 December 1264 to 88;20,47° for his epoch, 17 January 1232, with a precessional and apogee motion of 1° in every 66 Persian/Egyptian years. 85.We know (Yabuuti, “Islamic Astronomy in China” (Note 45), pp. 22, 24) that Jamāl al-Dīn and his team of Persian astronomers in China measured the longitude of the solar apogee as 89;21° in 660 H (ad 1261/1262). In Sanjufīnī’s zīj (Note 72), which is on the basis of their parameter values, the apogeal motion with a rate of 1°/60y is clearly different from the precessional one with a rate of 1°/73y, which can be derived from the values tabulated for them in the two separate columns in the table for the solar and the planetary mean motions from 764 to 895 H (ff. 44v–46r). Sanjufīnī (f. 44v) gives the values 91;1,20° and 78;46°, respectively, for the longitudes of the apogees of the Sun and Venus for 24 Jumādā I 764 (10 March 1363, according to the astronomical Hijra calendar). Accordingly, it seems that he added an increment of 1;40,20° to Jamāl al-Dīn’s value in order to update it for his own time, which is in accordance with the rate of apogeal motion of 1°/60y and the period of about one century between them. If this is true, then Jamāl al-Dīn’s value for the longitude of the apogee of Venus for ad 1261/1262 was equal to 77;6°. It should be noted that Jamāl al-Dīn precedes Ibn al-Shāṭir (see Mozaffari, op. cit. (Note 5)) in putting a clear distinction between the apogeal and precessional motions by one century. 
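A quick arithmetical check of the reconstruction in the preceding note may be helpful; it is illustrative only, and the interval of roughly 101 Julian years (from ad 1261/1262 to 1363) is simply a rounding of the span stated there.

```python
def sexa(*parts):
    """Sexagesimal digits -> decimal degrees."""
    return sum(p / 60**i for i, p in enumerate(parts))

def to_sexa_str(x):
    """Decimal degrees -> 'deg;min' string, rounded to the minute."""
    d = int(x)
    m = int(round((x - d) * 60))
    return f"{d};{m}"

# Sanjufini's solar apogee (91;1,20 for ad 1363) minus Jamal al-Din's (89;21 for ad 1261/2):
increment = sexa(91, 1, 20) - sexa(89, 21)
print(to_sexa_str(increment))                  # 1;40 -> the 1;40,20 increment, to the minute

# About 101 years at the apogeal rate of 1 degree per 60 years:
print(to_sexa_str(101 / 60))                   # 1;41 -> consistent with that increment

# Back-deriving Jamal al-Din's Venus apogee from Sanjufini's 78;46:
print(to_sexa_str(sexa(78, 46) - increment))   # 77;6 -> the value given in the note
```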
86. Ibn al-Shāṭir, Zīj (Note 73), K: f. 52r, L1: f. 65v, L2: f. 50v, O: f. 31r, PR: f. 100r. A curious feature of Ibn al-Shāṭir’s astronomy is that he correctly believed that the motion of the solar and planetary apogees (which he takes as equal to 1° in 60 Persian/Egyptian years) is not equal to, but larger than, the precession (which he takes as 1°/70y); see Mozaffari, op. cit. (Note 5). It is to be noted that in my previous study (Mozaffari, “Limitations,” Part 1, p. 326), the value 79;12° for the longitude of the solar apogee, which V. Roberts (“The Solar and Lunar Theory of Ibn ash-Shāṭir, a pre-Copernican Copernican Model,” Isis, 48, 1957, pp. 428–32, p. 430) quotes from Ibn al-Shāṭir’s Nihāyat al-Sūl fī Taṣḥīḥ al-Uṣūl (A Text of Final Inquiry in Correcting the Parameters), was taken as Ibn al-Shāṭir’s formal value. He derived this value from an observation made in Damascus on the first day of the year 701 Y/24 Rabī’ I 732 H (24 December 1331, JDN 2207563). This value is strangely in error by more than 12°. Moreover, it is inconsistent with the more precise values Ibn al-Shāṭir lists in his table of the longitudes of the solar and planetary apogees in the Jadīd zīj, which gives a value of about 89;52° for the given date. It seems very likely that a scribal error occurred in the manuscript Roberts made use of (Oxford, Bodleian, Marsh 139), owing to the similarity of the abjad numerals for 89;52 and 79;12. 87. Al-Kāshī, Zīj (Note 74), IO: ff. 127v, 128v, P: pp. 107, 109. Note that al-Kāshī’s values were updated from the Īlkhānī zīj, taking the precessional and apogee motion as equal to 1° in 70 Persian/Egyptian years. 88. Ulugh Beg, Zīj (Note 75), P1: ff. 116r, 143r, P2: ff. 133r, 160v. The table has λA = 90;30,4,48° for the Sun; the table of the solar equation of centre is always additive, but not displaced: in it, all entries have been increased by qmax = 1;55,53,12°; the same value has therefore been subtracted from the values of λA, and the results were tabulated. ‘Alī b. Muḥammad Qūshčī (ca. ad 1402–1474), an astronomer of the Samarqand observatory, explains this procedure for preparing the conventional equation tables in the case of the Sun in his Commentary on the Zīj of Ulugh Beg (Sharḥ-i Zīj-i Ulugh Beg, N: Iran, National Library, no. 20127–5, p. 292, P: Iran, Parliament Library, no. 6375/1, p. 169, PN: USA, Rare Book & Manuscript Library of University of Pennsylvania, LJS 400, f. 258r).
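The precessional and apogeal increments quoted in Notes 79, 81 and 85 are simple quotients of an elapsed interval by the number of years per degree, expressed sexagesimally. The short Python sketch below only illustrates that arithmetic; the function names and the rounding to two sexagesimal places are ours, and the small discrepancy from the 4;40,19° of Note 79 presumably reflects the fractional year counts used there.

import math

def to_sexagesimal(deg, places=2):
    """Render a decimal angle in degrees as a degrees;minutes,seconds string."""
    whole = int(deg)
    frac = deg - whole
    parts = []
    for _ in range(places):
        frac *= 60
        parts.append(int(frac))
        frac -= int(frac)
    return f"{whole};" + ",".join(str(p) for p in parts)

def increment(elapsed_years, years_per_degree):
    """Precession or apogee shift accumulated at 1 degree per years_per_degree."""
    return elapsed_years / years_per_degree

# Note 79: interval 672 - 345 = 327 years at 1 degree per 70 years
print(to_sexagesimal(increment(672 - 345, 70)))   # about 4;40
# Note 85: roughly one century at 1 degree per 60 years
print(to_sexagesimal(increment(100, 60)))         # about 1;40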
Correlation measures the relationship between two variables. More precisely, it calculates the level of change that you can expect to see in one variable due to a change in another variable. Imagine a scatter graph where your independent variable is “time spent studying” and your dependent variable is “number of questions answered correctly”. Do you think that the amount of time you spend studying is related to the number of questions you answer correctly in Kinnu? It probably is! That means that ‘the amount of time spent studying is correlated with the number of questions answered correctly’. The more you study, the more questions you answer correctly. The more X you have, the more Y you also have.

If, as one value gets higher, the other one does too, then you have a positive correlation. An example of a positive correlation is height and weight. As people get taller, they also tend to weigh more. It works both ways: if one value gets lower and the other one does too, that is also a positive correlation. So, a positive correlation is when both variables move in the same direction. On a scatter plot, a positive correlation slopes up and to the right.

A negative correlation is when one value moves in one direction and the other moves in the opposite direction. For example, as one gets higher, the other gets lower – like when you climb up a mountain and get higher above sea level, the temperature gets lower. On a scatter plot, a negative correlation slopes down and to the right.

What does no correlation look like? When there is no correlation between variables, a scatter plot looks like somebody has just randomly thrown darts at it. There is no real pattern to be seen in the data. This shows that your data is not correlated. For example, there is no correlation between the amount of tea you drink and how long your commute is.

The line of best fit in scatterplots

Often when viewing scatterplots you will see a straight line running through the centre of the mass of data points. This is called the line of best fit, and it represents a linear estimate of the dependent variable based on the value of the independent variable. It enables a visualization of the general trend in your data. When data is tightly clustered together, it is relatively easy to visualize the general trend without the line of best fit. However, in cases where your data is messier, it serves as a useful visualization tool. It is also used in predictive statistical models to mathematically – specifically algebraically – represent the relationships between variables.

Does the slope matter when visualizing correlation?

When you look at a scatter plot, you will be able to visualize the strength of a correlation. Often on a scatter plot, you will also see a line of best fit – the line that runs through the middle of the data points. When visualizing a correlation, the steepness of this line does not affect the strength of the correlation. What affects the strength of a correlation is how closely the data points cluster around that line, which represents how reliably a certain change in one variable predicts a change in the other.

Pearson’s Correlation Coefficient, otherwise known as Pearson’s r, is a common way to calculate the correlation between two quantitative variables. It was formalized by Karl Pearson in the 1890s, building on earlier work by Francis Galton.
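As a minimal sketch of fitting a line of best fit, the example below applies NumPy’s least-squares polynomial fit to made-up study-time data; the numbers are invented purely for illustration and do not come from Kinnu.

import numpy as np

# Hypothetical data: hours spent studying vs. questions answered correctly
hours   = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
correct = np.array([3, 5, 4, 7, 8, 8, 10, 12], dtype=float)

# Fit a straight line (degree-1 polynomial): correct ≈ slope * hours + intercept
slope, intercept = np.polyfit(hours, correct, deg=1)
print(f"line of best fit: y = {slope:.2f}x + {intercept:.2f}")

# The fitted line can then be drawn over a scatter plot, e.g. with matplotlib:
# import matplotlib.pyplot as plt
# plt.scatter(hours, correct)
# plt.plot(hours, slope * hours + intercept)
# plt.show()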
The Pearson Correlation Coefficient tells you in which direction two variables are correlated, positively or negatively, as well as the strength of that correlation. Pearson’s coefficient is a key tool for data scientists to quantify the strength of a correlation, instead of guessing based on visual representations.

Correlation coefficient interpretation

Pearson’s correlation coefficient ranges from -1 to +1. A correlation coefficient of less than 0 signifies a negative correlation, while greater than 0 signifies a positive correlation. But the strength of a correlation is also important. The table below shows you how to define the strength of your correlation.

Correlation is not causation

If you calculated the correlation coefficient – the strength of the relationship – for your two continuous variables and saw that the more people studied, the better test scores they got, you could say that there was a correlation between time studying and test scores. However, you can’t ever say that one caused the other from the correlation coefficient alone. This is true no matter how intuitive or obvious it might seem. ‘But of course studying causes better test scores,’ you say. What if I told you that per capita cheese consumption was correlated with the number of people who died by getting tangled in their bedsheets? Would you be so sure that cheese causes this? What about the fact that the number of films Nicolas Cage appears in is correlated with the number of people who drown in a pool? Would you tell me that Nicolas Cage films cause drownings?
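A minimal sketch of computing Pearson’s r for the hypothetical study-time data used above; scipy.stats.pearsonr is a standard call, and the second computation simply restates the definition (covariance divided by the product of the standard deviations).

import numpy as np
from scipy.stats import pearsonr

hours   = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
correct = np.array([3, 5, 4, 7, 8, 8, 10, 12], dtype=float)

# r is the correlation coefficient (-1 to +1); p is the two-sided p-value
r, p = pearsonr(hours, correct)
print(f"Pearson's r = {r:.2f}, p = {p:.4f}")

# The same value from the definition
r_manual = np.cov(hours, correct, ddof=1)[0, 1] / (
    np.std(hours, ddof=1) * np.std(correct, ddof=1))
print(f"manual r = {r_manual:.2f}")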
Using the words you have learned, describe the following mathematical symbols. 1) The plus symbol. 2) The minus symbol. 3) The multiplication symbol. 4) The equals symbol. 5) The pi symbol.

Section 2 Angles

Look at the figure and say which lines are:
1) These two lines meet at an angle. This angle is less than 90° (ninety degrees). It is an acute angle.
These two lines form a right angle. The two lines are perpendicular to each other.
5) Lines FK and AB intersect at point X. The angles FXB and BXK are next to each other, or adjacent. The sum of these angles is 180°. They are supplementary angles.
6) Angles ABY and YBC are equal. Line BY bisects angle ABC. BY is the bisector of angle ABC. The sum of angles ABY and YBC is 90°. They are complementary angles.

Describe the lines and angles in the following figures (points A, B, C and lines x, y, z).

A triangle is a three-sided figure. The three sides of a triangle meet at points called vertices (singular: vertex). The vertex at the top of a triangle may be called the apex, and the line at the bottom may be called the base.
1) In triangle ABC, line BC is produced to point X. ACB is an interior angle, and ACX is an exterior angle.
2) This is an isosceles triangle.
3) This is an equilateral triangle.
4) This is a right-angled triangle. In a right-angled triangle the side opposite the right angle is called the hypotenuse. The theorem of Pythagoras states: “In a right-angled triangle, the square on the hypotenuse is equal to the sum of the squares on the other two sides.”

Section 2 Congruence, similarity

1) If the following parts of two triangles are equal: a) two sides and the included angle; or, b) a right angle, the hypotenuse and one side; or, c) two angles and a corresponding side; or, d) all three sides; then the two triangles are congruent.
2) If two triangles have their corresponding angles equal, they are similar.
3) These two triangles are on either side of an axis of symmetry.

Describe each triangle, and use your ruler to discover any relationships between the triangles (i.e. symmetry, similarity or congruence).
1) If each of the angles in a triangle is equal to 60°, the triangle is called _________.
2) A line which meets another _____________ at 90° is called a ____________ line.
3) If two angles of a triangle are equal to 45°, the triangle is called a _____________.
4) If we ______________ a right angle, we have two _____________ angles of 45°.
5) Each triangle has three points, or _____________________.
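The Pythagorean relation quoted above can be checked numerically; the short Python sketch below uses the 3–4–5 triangle, an example of our own choosing, not one taken from the exercise sheet.

import math

a, b = 3.0, 4.0              # the two shorter sides of a right-angled triangle
c = math.hypot(a, b)         # equivalent to math.sqrt(a**2 + b**2)
print(c)                     # 5.0, since 3**2 + 4**2 == 5**2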
Descendants of African American slaves have never been compensated for one of the worst crimes against humanity. The lost wages suffered by slaves and the asset value of slaves to the slaveholders are well over one trillion US dollars in today’s money. Equally important is the value of the promise of land to freed slaves, commonly known as 40 acres and a mule. The word restitution is used deliberately, meaning the recovery and return of benefits obtained through improper means. Slavery in the United States was particularly abominable because it was chattel slavery, meaning that people were the property of owners and were bought and sold as property. A common argument against reparations for American slavery is that it was legal then. But at the Nuremberg trials, where Nazis were brought to justice, the US prosecutor Robert H. Jackson said that common law recognizes rules of conduct and that this is sufficient to establish guilt and judgments of wrongdoing. In the case of slavery, Europe tacitly permitted slavery in the colonies, but slavery was prohibited in Europe and Africa, and for Natives in the Americas. It was widely recognized that chattel slavery and permanent slavery were morally wrong, and this is sufficient to establish guilt and judgments of wrongdoing in the United States.

In the United States slavery ended with the Confiscation Acts and the Emancipation Proclamation in 1863, and the 13th Amendment made slavery illegal in 1865. The Civil War started in 1861 and ended in 1865, and many slaves fought in it. After Abolition, Special Field Order No. 15 of the US Army in 1865 confiscated 400,000 acres of land along the Atlantic coast of Florida, Georgia and South Carolina, and approximately 18,000 freedmen were settled there. They were each given 40 acres, but on a temporary basis. They were to receive a mule, left over from the war. This gave rise to the expression “40 acres and a mule.” At the end of Reconstruction in 1877 most of that land was given back to the original White owners. The Southern Homestead Act of 1866 was designed to make 40 acres available to Freedmen. This transition went along as well as it could while Federal troops oversaw the plan. But in 1877 the troops withdrew and these arrangements disintegrated.

Census data does not separate White and Black farmers until 1900, but W.E.B. Du Bois estimated that Black farmers owned 3 million acres in 1875 and 8 million in 1890. The peak year, from census data, shows 12 million acres in 1910 fully owned by 175,290 non-white farmers and partially owned by 43,177 non-white farmers. Roughly, this is 60 acres per farm. Because of rising cotton prices, many farmers did very well initially, but as Jim Crow laws set in, farm operating contracts became more difficult, and this, together with a general collapse of farming, led to widespread abandonment of Black-owned farms. Today, there are 45,000 Black-owned farms. In the early 20th century there was the Great Migration, when about 6 million Afrodescendants left the South. In 1863, over 90% lived in the South, and this held true until about 1900. The majority of the population in South Carolina and Mississippi were African Americans, and more than 40 per cent in Georgia, Alabama, Louisiana and Texas, and this changed drastically with the migration to the North. The 1910 agricultural census shows that there were over 3 million farms in the South farming 354 million acres. This is down by 7 million acres from the 1900 census, and today, the South farms 270 million acres.
Maps of this census show that almost all farms in the South were less than 80 acres, except for areas around Savannah. The census of 1910 does not distinguish between sharecroppers and tenants, and lists 670,000 Black tenant farmers and 1,200 Black farm managers cultivating 27 million acres. This amounts to 40 acres per farm. Sharecropping existed well into the 1950s. Sharecroppers represent the potential of how much land Black farmers would have been able to cultivate if they had been able to own land. In 1910, there were a total of 890,000 Afrodescendants cultivating a total of 39 million acres. The total population of Afrodescendants was 10 million. After 1910 there was a steady decline of Afrodescendant farmers as they moved to the North during the Great Migration in the ensuing decades.

Based on these data from 1910, when agriculture was the main economic activity and when the majority of the population in the South was Black and rural, and applying a 2017 average value of Southern agricultural land of 3,200 US dollars per acre, the total value of that land is 125 billion US dollars in 2017 dollars. This dollar value represents damages suffered by those sharecroppers and the agricultural assets that were denied to the Black community. The lack of the opportunity to accumulate assets means that these families remain in poverty. Poverty programs only address income, and thus asset poverty persists. This of course is true for Whites as well, but for Blacks, asset accumulation is much more difficult.

For instance, in 2010, 1.25 billion US dollars from the U.S. Department of Agriculture (USDA) was paid to African American farmers through the settlement of the Pigford v. Glickman case. The Pigford case is a 1999 class action discrimination suit that showed that the USDA was biased when making loans or farm assistance available at the county level between the years 1983 and 1997. African American farmers were either denied or had to wait longer for loan approvals. In agriculture especially, timely availability of credit is imperative to run a farm successfully. Receiving credit for seeds is meaningless if the time for planting has already passed. About 2,000 farmers were in that first claim and two options were offered: Track A provided a settlement of $50,000 and relief from loans and tax liabilities. By showing evidence of greater damage, farmers could apply for Track B claims for larger amounts. But there were numerous problems implementing the payments, and only 31% of Track A and 169 eligible Track B claimants. Eventually there were a total of 15,645 farmers who received $1 billion US dollars in cash, debt relief and other credits under Track A. An additional 17,000 farmers received $1.2 billion US dollars in 2010 under what is referred to as Pigford II. These two settlements are the largest settlements to date to African American farmers, but they are specifically for discrimination in access to farm credit and only cover the period between 1983 and 1997.

In 1910, 6.7 per cent of Afrodescendants farmed. Using this percentage, an opportunity cost of assets can be calculated for the African American community today. The 2010 census recorded 43 million self-identified African Americans, and the 2014 estimate by the Census Bureau is 47 million. Later projections are not available yet. Using the 2010 census and 6.7 per cent yields 2.9 million people, and 3.2 million for the 2014 estimate.
Estimating damages from these numbers yields 367 and 403 billion US dollars based on the value of agricultural land in the South in 2017. The total damages are estimated between 453 and 489 billion US dollars in lost asset accumulation opportunities suffered by Afrodescendants. These estimates represent the restitution. Other researchers have calculated restitution using wages, or the actual value of slaves. Several economists have looked at what the total value of slavery to the economy was before Abolition. On the one hand, slavery was unpaid labor. On the other hand, slaves were like capital that had a value in the market. In addition to that, researchers looked at the value of the promise of access to land after Abolition.

Larry Neal, an economist at the University of Illinois at Urbana-Champaign, looked at wages between 1620 and 1840 and compounded these at 3 per cent. His estimate for lost wages during that period is 1.4 trillion in 2016 US dollars. This shows how much white farmers benefited from slavery. Richard Vedder, an economist at Ohio University, estimated a similar number, 5 to 10 trillion US dollars, as an accumulated gain in wealth for white Southerners. Tim Worstall, a journalist, used the market value of an enslaved person and then calculated the total wealth represented by slaves, as an asset compounded at 1 per cent, to be about 1.75 trillion US dollars, or about 40 thousand US dollars for each Afrodescendant today. But slaves had different prices depending on their skill levels as either artisans or domestics, or if they were known runaways or had physical impairments. Women commanded a higher price than men for obvious reasons. Samuel Williamson, an economist at Miami University, calculated an average price for slaves of between 300 US dollars in 1804 and 800 US dollars in 1860. He then calculated a labor income value of owning a slave of about 140,000 US dollars in 2016 US dollars. Williamson uses the inflation rate calculator at a website called measuringworth.com. There, the average annual inflation rate is 2.2 per cent for the period between 1860 and 2016. Theoretically, using the value of a slave is more appropriate for calculating reparations than lost wages. This approach uses slaves as an asset. But after Abolition this asset disappeared. Piketty developed a graph, reproduced below as Figure 11, which illustrates the changing nature of wealth.

Thomas Craemer, a sociologist at the University of Connecticut, used Field Order No. 15 and the Reparations Bill that was passed in the U.S. House of Representatives in 1866. Both decrees speak of 40 acres to each freedman. He then attached an average price of $3,020 per acre in 2015 and multiplied 40 acres at this price by the 3,953,760 slaves counted in the 1860 census. This yielded a number of 486 billion US dollars. Prices per acre vary hugely from state to state, so this might be overstated. Furthermore, Field Order No. 15 states that “three respectable Negroes, heads of families ….. shall have a plot of not more than forty acres of tillable ground.” In other words, the intention of Field Order No. 15 was not to give 40 acres to each slave, but rather to a portion of such individuals as would be able to conduct agricultural production on 40 acres. Also, Reparations Bill H.R. 29, which was introduced by Representative Thaddeus Stevens, states that “each male who is the head of a family …. or each widow who is the head of a family…” shall receive 40 acres.
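For readers who want to retrace the arithmetic, the following sketch reproduces the land-value and opportunity-cost figures quoted above from the inputs stated in the text (39 million acres in 1910, 3,200 US dollars per acre in 2017, 40 acres per claimant, and a 6.7 per cent farming share of the 2010 and 2014 population counts). It is an illustration only; the small difference from the quoted 367 billion comes from rounding in the source.

ACRE_PRICE_2017 = 3_200          # average value of Southern farmland, USD per acre (2017)
ACRES_1910      = 39_000_000     # acreage cultivated by Afrodescendant farmers in 1910

land_value = ACRES_1910 * ACRE_PRICE_2017
print(f"1910 acreage at 2017 prices: {land_value / 1e9:.0f} billion USD")   # ~125 billion

FARMING_SHARE = 0.067            # share of Afrodescendants who farmed in 1910
PLOT_ACRES    = 40               # the "40 acres" per claimant

for label, population in [("2010 census", 43_000_000), ("2014 estimate", 47_000_000)]:
    claimants = population * FARMING_SHARE
    damages = claimants * PLOT_ACRES * ACRE_PRICE_2017
    print(f"{label}: {damages / 1e9:.0f} billion USD")   # roughly 369 and 403 billion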
In other words, the promise was not 40 acres for all slaves, but rather land for a family unit of perhaps four or five, which would be necessary to successfully till 40 acres.

Agriculture is an inherently risky endeavor. If the harvest is poor, credit payments are difficult to make. If the harvest is larger than usual, prices drop and the benefits of a large harvest are depressed. If credit is not given in a timely manner, planting cannot happen in a timely manner and production is compromised. This happens to all Black farmers even today. Farming was particularly hazardous for Blacks because of lynchings and the burning down of farms, which resulted in thousands of acres of farm losses. This practice was called “whitecapping.” Also, we know from anecdotal evidence collected by Raymond Winbush that Black farmers were careful not to be “too successful” for fear of having their farms taken away. The prevalence of this behavior makes comparing the productivity of Black and White farms difficult and problematic. Brent Gloy showed that rates of return for farming can vary greatly. In 1973 the rate was 28 per cent, but on average, over the period between 1960 and 2016, it was 7 per cent. The standard deviation he calculated was 6.4 per cent, demonstrating the riskiness of the business. Since the 1970s, agricultural land values have increased threefold to fourfold due to increased productivity and export subsidies for all kinds of agricultural products. Other subsidies, such as favorable credit, made farming less risky, and a coordinated effort to store food and distribute the surplus to Africa and other areas with food shortages through the Agricultural Trade Development and Assistance Act (PL 480) meant that farming on more marginal lands became profitable.

Using “40 acres and a mule” as the basis for calculating restitution to descendants of American slaves, an amount of 453 to 489 billion dollars was estimated. This amount is similar to the asset gap between Blacks and Whites.

The American media has paid increasing attention to the legacies of slavery. The new National Museum of African American History and Culture features a huge exhibition on the history of slavery. Many US universities are studying their links with slavery and the slave trade. In several cases, schools decided to provide symbolic reparations by renaming buildings and/or creating memorials and monuments to honor enslaved men and women. But these measures do not seem to suffice: several activists and ordinary citizens are calling for financial reparations. Students of Georgetown University recently voted to pay a fee to finance a reparations fund to benefit the descendants of the 1838 sale of enslaved people owned by the Society of Jesus. The Democratic presidential candidates are routinely asked if they would support studies to provide financial reparations for slavery to African Americans. What is often missed is that these calls started long ago. Writers and readers also forget that black women championed demands for reparations for slavery.

Belinda Sutton is among the first black women to demand reparations for slavery in North America. Her owner, Isaac Royall Junior, fled North America in 1775, during the American Revolutionary War. He left behind his assets, but his will included provisions to pay Belinda a pension for three years. Like today, the political context shaped these early demands for reparations and the responses petitioners received.
Unlike other former slaves, Sutton’s odds of obtaining restitution were greater because her former owner was a British Loyalist. Moreover, he had already determined in his will to pay her a pension. Freedwomen and their descendants continued fighting for reparations in later years. They knew more than anyone else the value of material resources because they lacked them. They were the ones doing the hard work of maintaining their households and raising children and grandchildren.

Sojourner Truth also demanded reparations for slavery through land redistribution. Following the end of slavery, during Reconstruction, Truth argued that slaves had helped to build the nation’s wealth and therefore should be compensated. In 1870, she circulated a petition requesting Congress to provide land to the “freed colored people in and about Washington” to allow them “to support themselves.” Yet Truth’s efforts were not successful. US former slaves got no land or financial support after the end of slavery. The context of the brutal end of Reconstruction, which cut short the promises of equal access to education and voting rights for black Americans, favored the rise of calls for reparations. And once again black women took the lead.

Ex-slave Callie House fought for reparations. A widow and mother of five children who worked as a washerwoman, she saw many former slaves who were old, sick, and unable to work to maintain themselves. House became one of the leaders of the National Ex-Slave Mutual Relief, Bounty and Pension Association, which gathered tens of thousands of former slaves to press the US Congress to pass legislation to award pensions to freedpeople. Soon the federal government started accusing the association of using the mail to run a fraud scheme. Callie House responded that the association’s goal was to obtain redress for a historical wrong. She reminded federal authorities that former slaves were left with no resources and had the right to organize themselves to demand restitution. She bravely denounced the government’s hostility toward the pension movement as motivated by racism. In 1916, the Post Office Department charged Callie House with using the US mail to defraud. She spent one year in prison.

Black women had good reasons to fight for reparations. Until the 1920s, black women were deprived of voting rights. More than black men, they were socially and economically excluded. With less access to education, even in old age they were the ones running the households. To most formerly enslaved women, expectations of social mobility were unrealistic. In contrast, pensions and land were tangible resources that could supply them with autonomy and possible social mobility.

In 1962, Audley “Queen Mother” Moore saw the approach of the one hundredth anniversary of the Emancipation Proclamation of 1863 as an occasion to discuss the legacies of slavery. To this end, she created the Reparations Committee for the Descendants of American Slaves (RCDAS), which filed a claim demanding reparations for slavery in a court of the state of California. She also authored a booklet underscoring that slaves had provided decades of unpaid work to slave owners. She emphasized the horrors of lynching, segregation, disfranchisement, rape, and police brutality. Yet the litigation was not successful. Moore defended the payment of financial reparations to all African Americans and their descendants, and argued that each individual and group should decide what to do with the funds.
She contended that the unpaid work provided by enslaved Africans and their descendants led to the wealth accumulation that made the United States “the richest country in the world.” In later years, Moore continued participating in organizations defending reparations for slavery. In 1968, she joined the Republic of New Africa and later supported the efforts of the National Coalition of Blacks for Reparations in America (N’COBRA). She made her last public appearance in her late nineties at the Million Man March held in Washington, DC, in October 1995, when she still called for reparations.

In 2002, Edward Fagan filed a class-action lawsuit on behalf of Deadria Farmer-Paellmann and other persons in similar situations. An African American activist and lawyer, Farmer-Paellmann founded the Reparations Study Group. Fagan’s lawsuit requested a formal apology and financial reparations from three US companies that profited from slavery. Among these corporations was Aetna Insurance Company, which held an insurance policy in the name of Abel Hines, Farmer-Paellmann’s enslaved great-grandfather. Although the case was dismissed in 2004, the US Court of Appeals for the Seventh Circuit later allowed the plaintiffs to pursue consumer protection claims exposing the companies named in the lawsuit for misleading their customers about their role in slavery.

Years marking commemorative dates associated with slavery favor the rise of demands for reparations. This year marks the four hundredth anniversary of the landing of the first enslaved Africans in Virginia. In addition, it is also the kick-off of the 2020 presidential campaign. For black groups and organizations that now fully engage in social media, it is time to renew calls for reparations that have been around for several decades. For potential presidential candidates, the debate on reparations is an opportunity to gain the black vote. But for black women, no matter the commemorative and electoral calendars, the fight for reparations is not a new opportunity; it is rather a long-lasting battle for social justice.

Students at Georgetown University voted on Thursday to increase their tuition to benefit descendants of the 272 enslaved people whom the Jesuits who ran the school sold nearly two centuries ago to secure its financial future. The fund they voted to create would represent the first instance of reparations for slavery by a prominent American organization. The proposal passed with two-thirds of the vote, but the student-led referendum was nonbinding, and the university’s board of directors must approve the measure before it can take effect. “We value the engagement of our students and appreciate that they are making their voices heard and contributing to an important national conversation,” Todd Olson, vice president for student affairs, said in a statement on Thursday. The undergraduate student body voted to add a new fee of $27.20 per student per semester to their tuition bill, with the proceeds devoted to supporting education and health care programs in Louisiana and Maryland, where many of the 4,000 known living descendants of the 272 enslaved people now reside. A 2016 article in The New York Times described the 1838 sale by what was then Georgetown College, the premier Catholic institution of higher learning in America at the time.
The college relied on Jesuit-owned plantations in Maryland that were no longer producing a reliable income to support it, so the Jesuit priests who founded and ran Georgetown decided to raise cash by selling virtually all its slaves, receiving the equivalent of about $3.3 million in today’s money. “The school wouldn’t be here without them,” said Shepard Thomas, a junior from New Orleans who is part of the campus group, Students for the GU272, that worked to hold the referendum. Mr. Thomas, a psychology major, is descended from slaves who were part of the 1838 sale. “Students here always talk about changing the world after they graduate,” he said. “Why not change the world when you’re here?” Mr. Thomas said the amount of the fee, $27.20, was chosen to evoke the number of people sold but not be too onerous for students. Tuition and fees for a full-time student per semester is $27,720.00. Georgetown University agreed in 2016 to give admissions preference to descendants of the 272 slaves; Mr. Thomas was one of the first to be admitted under the policy. The school also formally apologized for its role in slavery, and has renamed two buildings on its campus to acknowledge the lives of slaves; one is now named for Isaac Hawkins, the first person listed in the 1838 sale. The university has about 7,000 undergraduates, so the fee would raise about $380,000 a year for the fund. “It makes me feel happy that we, as students, decided to set a precedent for the betterment of people’s lives,” Mr. Thomas said. When the police break your teammate’s leg, you’d think it would wake you up a little. When they arrest him on a New York street, throw him in jail for the night, and leave him with a season-ending injury, you’d think it would sink in. You’d think you’d know there was more to the story. I still remember my reaction when I first heard what happened to Thabo. It was 2015, late in the season. Thabo and I were teammates on the Hawks, and we’d flown into New York late after a game in Atlanta. When I woke up the next morning, our team group text was going nuts. Details were still hazy, but guys were saying, Thabo hurt his leg? During an arrest? Wait — he spent the night in jail?! Everyone was pretty upset and confused. Well, almost everyone. My response was….. different. I’m embarrassed to admit it. Which is why I want to share it today. Before I tell the rest of this story, let me just say real quick — Thabo wasn’t some random teammate of mine, or some guy in the league who I knew a little bit. We’d become legitimate friends that year in our downtime. He was my go-to teammate to talk with about stuff beyond the basketball world. Politics, religion, culture, you name it — Thabo brought a perspective that wasn’t typical of an NBA player. And it’s easy to see why: Before we were teammates in Atlanta, the guy had played professional ball in France, Turkey and Italy. He spoke three languages! Thabo’s mother was from Switzerland, and his father was from South Africa. They lived together in South Africa before Thabo was born, then left because of apartheid. It didn’t take long for me to figure out that Thabo was one of the most interesting people I’d ever been around. We respected each other. We were cool, you know? We had each other’s backs. Anyway — on the morning I found out that Thabo had been arrested, want to know what my first thought was? About my friend and teammate? My first thought was: What was Thabo doing out at a club on a back-to-back?? Yeah. Not, How’s he doing? 
Not, What happened during the arrest?? Not, Something seems off with this story. Nothing like that. Before I knew the full story, and before I’d even had the chance to talk to Thabo….. I sort of blamed Thabo. I thought, Well, if I’d been in Thabo’s shoes, out at a club late at night, the police wouldn’t have arrested me. Not unless I was doing something wrong. It’s not like it was a conscious thought. It was pure reflex — the first thing to pop into my head. And I was worried about him, no doubt. But still. Cringe. A few months later, a jury found Thabo not guilty on all charges. He settled with the city over the NYPD’s use of force against him. And then the story just sort of….. disappeared. It fell away from the news. Thabo had surgery and went through rehab. Pretty soon, another NBA season began — and we were back on the court again. Life went on. But I still couldn’t shake my discomfort. I mean, I hadn’t been involved in the incident. I hadn’t even been there. So why did I feel like I’d let my friend down? Why did I feel like I’d let myself down?

A few weeks ago, something happened at a Jazz home game that brought back many of those old questions. Maybe you saw it: We were playing against the Thunder, and Russell Westbrook and a fan in the crowd exchanged words during the game. I didn’t actually see or hear what happened, and if you were following on TV or on Twitter, maybe you had a similar initial impression of it. Then, after the game, one of our reporters asked me for my response to what had gone down between Russ and the fan. I told him I hadn’t seen it — and added something like, But you know Russ. He gets into it with the crowd a lot. Of course, the full story came out later that night. What actually happened was that a fan had said some really ugly things at close range to Russ. Russ had then responded. After the game, he’d said he felt the comments were racially charged.

The incident struck a nerve with our team. In a closed-door meeting with the president of the Jazz the next day, my teammates shared stories of similar experiences they’d had — of feeling degraded in ways that went beyond acceptable heckling. One teammate talked about how his mom had called him right after the game, concerned for his safety in SLC. One teammate said the night felt like being “in a zoo.” One of the guys in the meeting was Thabo — he’s my teammate in Utah now. I looked over at him, and remembered his night in NYC. Everyone was upset. I was upset — and embarrassed, too. But there was another emotion in the room that day, one that was harder to put a finger on. It was almost like….. disappointment, mixed with exhaustion. Guys were just sick and tired of it all. This wasn’t the first time they’d taken part in conversations about race in their NBA careers, and it wasn’t the first time they’d had to address the hateful actions of others. And one big thing that got brought up a lot in the meeting was how incidents like this — they weren’t only about the people directly involved. This wasn’t only about Russ and some heckler. It was about more than that. It was about what it means just to exist right now — as a person of color in a mostly white space. It was about racism in America. Before the meeting ended, I joined the team’s demand for a swift response and a promise from the Jazz organization that it would address the concerns we had. I think my teammates and I all felt it was a step in the right direction. But I don’t think anyone felt satisfied.
There’s an elephant in the room that I’ve been thinking about a lot over these last few weeks. It’s the fact that, demographically, if we’re being honest: I have more in common with the fans in the crowd at your average NBA game than I have with the players on the court. And after the events in Salt Lake City last month, and as we’ve been discussing them since, I’ve really started to recognize the role those demographics play in my privilege. It’s like — I may be Thabo’s friend, or Ekpe’s teammate, or Russ’s colleague; I may work with those guys. And I absolutely 100% stand with them. But I look like the other guy. And whether I like it or not? I’m beginning to understand how that means something. What I’m realizing is, no matter how passionately I commit to being an ally, and no matter how unwavering my support is for NBA and WNBA players of color….. I’m still in this conversation from the privileged perspective of opting in to it. Which of course means that on the flip side, I could just as easily opt out of it. Every day, I’m given that choice — I’m granted that privilege — based on the color of my skin. In other words, I can say every right thing in the world: I can voice my solidarity with Russ after what happened in Utah. I can evolve my position on what happened to Thabo in New York. I can be that weird dude in Get Out bragging about how he’d have voted for Obama a third term. I can condemn every racist heckler I’ve ever known. But I can also fade into the crowd, and my face can blend in with the faces of those hecklers, any time I want. I realize that now. And maybe in years past, just realizing something would’ve felt like progress. But it’s NOT years past — it’s today. And I know I have to do better. So I’m trying to push myself further. I’m trying to ask myself what I should actually do. How can I — as a white man, part of this systemic problem — become part of the solution when it comes to racism in my workplace? In my community? In this country? These are the questions that I’ve been asking myself lately. And I don’t think I have all the answers yet — but here are the ones that are starting to ring the most true: I have to continue to educate myself on the history of racism in America. I have to listen. I’ll say it again, because it’s that important. I have to listen. I have to support leaders who see racial justice as fundamental — as something that’s at the heart of nearly every major issue in our country today. And I have to support policies that do the same. I have to do my best to recognize when to get out of the way — in order to amplify the voices of marginalized groups that so often get lost. But maybe more than anything? I know that, as a white man, I have to hold my fellow white men accountable. We all have to hold each other accountable. And we all have to be accountable — period. Not just for our own actions, but also for the ways that our inaction can create a “safe” space for toxic behavior. And I think the standard that we have to hold ourselves to, in this crucial moment….. it’s higher than it’s ever been. We have to be active. We have to be actively supporting the causes of those who’ve been marginalized — precisely because they’ve been marginalized. Two concepts that I’ve been thinking about a lot lately are guilt and responsibility. When it comes to racism in America, I think that guilt and responsibility tend to be seen as more or less the same thing. But I’m beginning to understand how there’s a real difference. 
As white people, are we guilty of the sins of our forefathers? No, I don’t think so. But are we responsible for them? Yes, I believe we are. And I guess I’ve come to realize that when we talk about solutions to systemic racism — police reform, workplace diversity, affirmative action, better access to healthcare, even reparations? It’s not about guilt. It’s not about pointing fingers, or passing blame. It’s about responsibility. It’s about understanding that when we’ve said the word “equality,” for generations, what we’ve really meant is equality for a certain group of people. It’s about understanding that when we’ve said the word “inequality,” for generations, what we’ve really meant is slavery, and its aftermath — which is still being felt to this day. It’s about understanding on a fundamental level that black people and white people, they still have it different in America. And that those differences come from an ugly history….. not some random divide. And it’s about understanding that Black Lives Matter, and movements like it, matter, because — well, let’s face it: I probably would’ve been safe on the street that one night in New York. And Thabo wasn’t. And I was safe on the court that one night in Utah. And Russell wasn’t. But as disgraceful as it is that we have to deal with racist hecklers in NBA arenas in 2019? The truth is, you could argue that that kind of racism is “easier” to deal with. Because at least in those cases, the racism is loud and clear. There’s no ambiguity — not in the act itself, and thankfully not in the response: we throw the guy out of the building, and then we ban him for life. But in many ways the more dangerous form of racism isn’t that loud and stupid kind. It isn’t the kind that announces itself when it walks into the arena. It’s the quiet and subtle kind. The kind that almost hides itself in plain view. It’s the person who does and says all the “right” things in public: They’re perfectly friendly when they meet a person of color. They’re very polite. But in private? Well….. they sort of wish that everyone would stop making everything “about race” all the time. It’s the kind of racism that can seem almost invisible — which is one of the main reasons why it’s allowed to persist. And so, again, banning a guy like Russ’s heckler? To me, that’s the “easy” part. But if we’re really going to make a difference as a league, as a community, and as a country on this issue….. it’s like I said — I just think we need to push ourselves another step further. First, by identifying that less visible, less obvious behavior as what it is: racism. And then second, by denouncing that racism — actively, and at every level. That’s the bare minimum of where we have to get to, I think, if we’re going to consider the NBA — or any workplace — as anything close to part of the solution in 2019. I’ll wrap this up in a minute — but first I have one last thought. The NBA is over 75% players of color. People of color, they built this league. They’ve grown this league. People of color have made this league into what it is today. And I guess I just wanted to say that if you can’t find it in your heart to support them — now? And I mean actively support them? If the best that you can do for their cause is to passively “tolerate” it? If that’s the standard we’re going to hold ourselves to — to blend in, and opt out? Well, that’s not good enough. It’s not even close. I know I’m in a strange position, as one of the more recognized white players in the NBA. 
It’s a position that comes with a lot of….. interesting undertones. And it’s a position that makes me a symbol for a lot of things, for a lot of people — often people who don’t know anything about me. Usually, I just ignore them. But this doesn’t feel like a “usually” moment. This feels like a moment to draw a line in the sand. I believe that what’s happening to people of color in this country — right now, in 2019 — is wrong. The fact that black Americans are more than five times as likely to be incarcerated as white Americans is wrong. The fact that black Americans are more than twice as likely to live in poverty as white Americans is wrong. The fact that black unemployment rates nationally are double the overall unemployment rate is wrong. The fact that black imprisonment rates for drug charges are almost six times higher nationally than white imprisonment rates for drug charges is wrong. The fact that black Americans own approximately one-tenth of the wealth that white Americans own is wrong. The fact that inequality is built so deeply into so many of our most trusted institutions is wrong. And I believe it’s the responsibility of anyone on the privileged end of those inequalities to help make things right. So if you don’t want to know anything about me, outside of basketball, then listen — I get it. But if you do want to know something? Know I believe that. Know that about me. If you’re wearing my jersey at a game? Know that about me. If you’re planning to buy my jersey for someone else…… know that about me. If you’re following me on social media….. know that about me. If you’re coming to Jazz games and rooting for me….. know that about me. And if you’re claiming my name, or likeness, for your own cause, in any way….. know that about me. Know that I believe this matters.

The debate on when it is relevant to apologize and pay reparations for misdeeds and human rights violations tells us that the past is never dead.

MEXICO CITY — Three weeks ago and 500 years after the arrival of Hernán Cortés in Veracruz, President Andrés Manuel López Obrador of Mexico sent a letter to the king of Spain. In it, he demanded an apology for the abuses inflicted on the indigenous peoples of Mexico by Spain, in view of what the Spaniards now consider “human rights violations.” And last week the prime minister of Belgium apologized in Parliament for the kidnapping, deportation and forced adoption of thousands of children born to mixed-race couples in its former African colonies. National apologies for misdeeds, crimes and odious behavior are not new. The West German government of Konrad Adenauer paid billions in reparations to the State of Israel and to Jewish people for Nazi crimes. Former President Jacques Chirac of France apologized for France’s role in deporting thousands of Jews to Nazi death camps. The reparations debate in the United States continues. A bill known as H.R. 40 was introduced in the House of Representatives by Representative John Conyers every year from 1989 until his resignation in 2017. It called for a formal study of the impact of slavery on African-Americans living today and the development of a proposal for reparations, among other things. The bill was reintroduced this year by Representative Sheila Jackson Lee. Most recently, several contenders for the Democratic Party’s presidential nomination, most notably Elizabeth Warren, have expressed some level of support for reparations for the descendants of enslaved men and women.
What all of this tells us is that the past is never dead and that no matter how anachronistic some demands may seem, historical grievances abound. The past five centuries of world history have featured conquests, plunder, torture, genocide, slavery, occupation and worse. The trend toward asking forgiveness and making reparations is overall a good thing. It acknowledges history while pointing a way forward, whether it be consolidating a national identity in Mexico, apologizing for atrocious colonial misdeeds in Africa or addressing inequality between blacks and whites in America. The debate over the Spanish and Portuguese conquests of what is now called Latin America took on a new meaning after 1992, when the former colonial powers and former colonies met to revisit and discuss Columbus’s arrival in the New World.

The Mexican case is especially complicated. Several polls showed Mexicans disagreed on Mr. López Obrador’s call for an apology as well as the issue’s relevance. Historians also made several points against his stance. First, the historians stated that Tenochtitlán, the Aztec capital, was captured thanks as much to Cortés’s allies among the other indigenous peoples of the time as to the Spaniards themselves. Then they recalled that the Aztecs were no choirboys: they resorted to cannibalism, human sacrifice, local wars to subjugate other peoples and violent repression of their enemies. Finally, and most important, they noted that Mexicans have always held an ambivalent position on their own national identity. During the past decades, children’s textbooks have implied that today’s inhabitants of Mexico are descended from indigenous people and not from the Spanish. The official narrative for more than a century now in Mexico is that it is the mestizo country par excellence. As the plaques at the National Anthropology Museum and Tlatelolco Square, where the final defeat of the Aztecs occurred, proclaim, “Neither a victory nor a defeat, here took place the painful birth of the mestizo people that today is Mexico.” There can be no “mestizaje” without both civilizations — the Spanish and the original peoples — taking part in it. However violent their encounter may have been, and acknowledging the brutal nature of the conquest, Mexicans seem to prefer to let sleeping dogs lie. While racism against indigenous minorities in Mexico is undeniable, and the country’s tiny European-origin minority frequently resorts to racist attitudes toward mestizos, an overwhelming majority of the people of Mexico are mestizos today. There are myriad things to fix in Mexico, but discrimination by mestizos against mestizos is not one of them.

Mr. López Obrador said in his letter to King Philip VI that he was not requesting reparations; the conquest cannot be repaired. The apology he demanded was immediately rejected by the government in Madrid, and in all likelihood, the entire affair will fade away. The Mexican president’s ploy was almost certainly demagogic in intent and motivation, invoking an anti-Spanish sentiment that he believes exists in Mexico, though polls suggest otherwise. Mexico does not need an apology, because it has no conflict with Spain today. But beyond the Mexican populist gesture, and the debates in the United States, Europe and Canada, lies a conversation waiting to be held. There are challenges for other peoples and groups that require atonement or forgiveness in order to be addressed.
In some cases, it can make an enormous difference, as with African-Americans, race and slavery in the United States. In others, it can disentangle complicated questions of national identity and victimization, as in Mexico. Reparations may ultimately be relevant only in some cases. But history is always relevant.

As this presidential campaign season gets under way, the racial wealth gap is getting a fair amount of attention. African-Americans typically have about one-tenth the wealth of whites. Several presidential hopefuls, such as former Secretary of Housing and Urban Development Julián Castro as well as Sens. Kamala Harris and Elizabeth Warren, have supported the idea of reparations for the descendants of slaves to rectify this massive inequality born out of an unspeakable historic injustice. A new paper from researchers at the Cleveland Federal Reserve now argues that almost all of the wealth gap between African-Americans and whites is driven by the racial income gap – African-Americans earning about half of what whites earn. One of the main findings of the paper rests on a hypothetical scenario that sets African-Americans’ earnings equal to those of whites from 1962 to the present and finds that African-Americans would have had 90% of the wealth of whites by 2007. The paper concludes by arguing that addressing the racial wealth gap would require focusing on fixing the racial income gap.

The single focus on income as the driver of racial wealth inequality rests on a model that strips away all of the systematic biases that result in lower incomes for African-Americans. Many of these directly relate to the racial wealth gap and the policy biases that favor whites. After all, income builds wealth, but wealth also generates future incomes, and systematic obstacles to building enough wealth hold back African-Americans from getting a fair shot at equal pay. Focusing only on income thus ignores the real importance of enacting policies that can quickly close the racial wealth gap, such as reparations. Earning the same money as whites then requires African-Americans to have more wealth to begin with than is currently the case, so that they can actually catch up to whites. After all, current earnings in no small part depend on people’s past opportunities, afforded to them by their families’ wealth. These include, but are not limited to, the quality of neighborhoods, schools and colleges. More wealth will allow people to move to better neighborhoods, to send their children to better schools, and to support their college education. Many African-Americans do not have these choices because of a lack of wealth. They then cannot gain the income that would give them and their children the same opportunities as whites have. Additional policy interventions need to occur to make sure that when African-Americans have the same amount of income, they can also build the same amount of wealth as whites. The evidence shows that at comparable income and education levels, for instance, African-Americans have systematically much less wealth than whites. Their incomes often don’t translate into the same amount of wealth because they face additional obstacles such as housing and mortgage market discrimination, resulting in residential segregation and fewer economic, educational and labor market opportunities. The link between higher earnings and more wealth needs to be the same for African-Americans as for whites, and that means eliminating systematic biases in housing, mortgage, credit, labor markets and education to begin with.
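The kind of counterfactual attributed to the Cleveland Fed paper above can be illustrated with a toy accumulation model. The sketch below is not the paper’s actual model; the incomes, savings rate, return, and initial wealth figures are invented purely to show the mechanics of equalizing incomes from 1962 onward and then comparing the resulting wealth.

def accumulate_wealth(initial_wealth, annual_income, n_years, save_rate=0.10, r=0.04):
    """Wealth after n_years, saving a fixed share of income at an annual return r."""
    wealth = initial_wealth
    for _ in range(n_years):
        wealth = wealth * (1 + r) + save_rate * annual_income
    return wealth

n_years = 2007 - 1962

# Stylized, invented figures: a constant white income, Black income at half of it,
# and a large initial wealth gap.
w_white = accumulate_wealth(initial_wealth=20_000, annual_income=50_000, n_years=n_years)
w_black_actual = accumulate_wealth(initial_wealth=5_000, annual_income=25_000, n_years=n_years)
w_black_counterfactual = accumulate_wealth(initial_wealth=5_000, annual_income=50_000, n_years=n_years)

print(f"actual-income wealth ratio:    {w_black_actual / w_white:.2f}")
print(f"equalized-income wealth ratio: {w_black_counterfactual / w_white:.2f}")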
For instance, even when African-Americans enjoy the same educational opportunities, they often face systematic obstacles in the labor market, which means lower earnings and fewer benefits. A college education for African-Americans still goes along with lower earnings, more unemployment and less wealth than is the case for whites. Systematic obstacles such as outright discrimination, mass incarceration, occupational steering and residential segregation cost African-Americans income right now. Several of these obstacles can be overcome with more wealth, which would allow people to move to safer, more diverse neighborhoods and to access similar educational opportunities, among other changes.

WHEREAS, the General Conference acknowledges and profoundly regrets the massive human suffering and the tragic plight of millions of men, women, and children caused by slavery and the transatlantic slave trade; and WHEREAS, at the conclusion of the Civil War, the plan for the economic redistribution of land and resources on behalf of the former slaves of the Confederacy was never enacted; and WHEREAS, the failure to distribute land prevented newly freed Blacks from achieving true autonomy and made their civil and political rights all but meaningless; and WHEREAS, conditions comparable to “economic depression” continue for millions of African Americans in communities where unemployment often exceeds 50 percent; and WHEREAS, justice requires that African American descendants of the transatlantic slave trade be assured of having access to effective and appropriate protection and remedies, including the right to seek just and adequate reparation or satisfaction for the legacy of damages, consequent structures of racism and racial discrimination suffered as a result of the slave trade; and WHEREAS, Isaiah 61:1-3 provides a model for reparations: “He has sent me to bind up the brokenhearted, to proclaim freedom for the captives, . . . to proclaim the year of the Lord’s favor, … and provide for those who grieve in Zion – to bestow on them a crown of beauty instead of ashes, the oil of gladness instead of mourning, and a garment of praise instead of a spirit of despair.”; and WHEREAS, on January 5, 1993, Congressman John Conyers Jr. (D-Mich.) introduced H.R. 40 to the House of Representatives, calling for the establishment of the Commission to Study Reparation Proposals for African Americans, “acknowledging the fundamental injustice, cruelty, brutality and inhumanity of slavery in the United States from 1619 to the present day,” for the purpose of submitting a report to Congress for further action and consideration with respect to slavery’s effects on African American lives, economics, and politics; Therefore, be it resolved: that we support the discussion and study of reparation for African Americans; that we petition the President, the Vice President, and the United States House of Representatives to support the passage and signing of H.R. 40; that a written copy of this petition be delivered to the President and Vice President of the United States, the United States Senate Majority Leader, the House Speaker, and House Member John Conyers Jr.; that the General Commission on Religion and Race and the General Board of Church and Society develop a strategy for interpretation and support of passage of H.R.
that the appropriate general boards and agencies of The United Methodist Church develop and make available to its members data on the history of slavery and the role of theology in validating and supporting both the institution and the abolition of the slave trade; and that we call upon The United Methodist Church to acknowledge the memory of the victims of past tragedies and affirm that, wherever and whenever these tragedies occur, they must be condemned and their recurrence prevented.

Amended and adopted 2004. Resolution #62, 2004 Book of Resolutions; Resolution #56, 2000 Book of Resolutions.

[Photo: The Rev. R. Albert Mohler Jr., president of Southern Baptist Theological Seminary, on Oct. 5, 2015. (Bruce Schreiner/AP)]

December 12, 2018

More than two decades after the Southern Baptist Convention — the country’s second-largest faith group — apologized to African Americans for its active defense of slavery in the 1800s, its flagship seminary on Wednesday released a stark report further delineating its ties to institutionalized racism.

The year-long study by the Southern Baptist Theological Seminary found that all four founding faculty members owned slaves and “were deeply complicit in the defense of slavery,” R. Albert Mohler Jr., president of the seminary, wrote in his introduction to the 72-page report he commissioned.

The report also noted that the seminary’s most important donor and chairman of its Board of Trustees in the late 1800s, Joseph E. Brown, “earned much of his fortune by the exploitation of mostly black convict lease laborers,” employing in his coal mines and iron furnaces “the same brutal punishments and tortures formerly employed by slave drivers.”

The report provided largely harsh assessments of the seminary’s past actions, even as it at times lauded the institution for racial strides. Many of the founding faculty members, “throughout the period of Reconstruction and well into the twentieth century, advocated segregation, the inferiority of African-Americans, and openly embraced the ideology of the Lost Cause of southern slavery,” which recast the South as an idyllic place for both slaves and masters and the Civil War as a battle fought over Southern honor, not slavery, Mohler wrote in his introduction.

The faculty opposed racial equality after Emancipation and advocated for the maintenance of white political control and against extending suffrage to African Americans, the report said. In the 19th and early 20th centuries, the seminary faculty relied on pseudoscience to justify its white-supremacist positions, concluding that “supposed black moral inferiority was connected to biological inferiority,” according to the report. And decades later, the seminary was slow to offer full support for the civil rights movement, advocating a “moderate approach.”

The seminary’s public reckoning comes as universities grapple with the darker corners of their pasts amid passionate challenges from students and faculty. At colleges across the country, protesters have toppled some Confederate monuments, while other statues remain the subjects of fierce debate.

“It is past time that The Southern Baptist Theological Seminary — the first and oldest institution of the Southern Baptist Convention, must face a reckoning of our own,” Mohler wrote.

Colby Adams, a spokesman for Mohler, said the theologian launched the historical investigation because people asked him specific questions “he didn’t know the answer to. We knew there was involvement.
We didn’t know the full history.”

The report has elicited a lukewarm reaction from experts, who said that while the seminary should be commended for admitting its racist history in writing, the revelations don’t come as a surprise, especially given the fact that the Southern Baptist Convention was formed in 1845 after a split with northern Baptists over slavery. The SBC is now the largest Protestant denomination in the country, with over 15 million members. What does matter, the experts said, are the actions the seminary takes from here and whether it makes reparations.

Jemar Tisby, a historian who writes about race and Christianity, said he expects many white Evangelicals will push back on the report by saying the seminary is being divisive and re-litigating its past. The school’s leadership needs to sit down with racial and ethnic minorities and “let themselves be led” to racial reconciliation, Tisby said. “They are at the very beginning of the journey,” he said. “What this document does is open up a new phase of the seminary on racial justice.”

Critics and other observers said the Southern Baptist Convention for too long has been hesitant to take full ownership of its past, for decades framing its split with northern Baptists as one over theological differences, not slavery. By commissioning the seminary’s report, Mohler may have been trying to change that, said Lawrence Ware, a professor at Oklahoma State University who studies race and religion. “I think that what he’s trying to do is he’s trying to force the Convention to have a conversation on race and racism that the Convention has really not wanted to have,” Ware told The Washington Post.

Ware said that while the report is “a step in the right direction,” some sections seem to soften the severity of the seminary’s racist actions. He called the report’s description of the faculty’s mixed record on the civil rights movement “double-handed” and said the document fails to account for the seminary’s lack of diversity among top leadership.

The seminary’s progress in the area of civil rights was slow. The Louisville school began admitting black students to degree programs in 1940 and fully integrated 11 years later. The report said that the seminary was skeptical of the civil rights movement’s direct-action tactics, but noted that faculty in the 1960s urged support for civil rights in general and invited the Rev. Martin Luther King Jr. to speak at the seminary in 1961.

In 1995, the Southern Baptist Convention adopted a resolution stating its explicit connection to slavery: “Our relationship to African-Americans has been hindered from the beginning by the role that slavery played in the formation of the Southern Baptist Convention; many of our Southern Baptist forbears defended the right to own slaves, and either participated in, supported, or acquiesced in the particularly inhumane nature of American slavery; and in later years Southern Baptists failed, in many cases, to support, and in some cases opposed, legitimate initiatives to secure the civil rights of African-Americans.”

Many Southern Baptists hoped the resolution would be the last time they would have to confront the denomination’s racist past, Mohler wrote in the report. “At that time, I think it is safe to say that most Southern Baptists, having made this painful acknowledgment and lamenting this history, hoped to dwell no longer on the painful aspects of our legacy. That is not possible, nor is it right,” he wrote. “We have been guilty of a sinful absence of historical curiosity.
We knew, and we could not fail to know, that slavery and deep racism were in the story.”

“[T]he moral burden of history requires a more direct and far more candid acknowledgment of the legacy of this school in the horrifying realities of American slavery, Jim Crow segregation, racism and even the avowal of white racial supremacy,” Mohler wrote in the report. “The fact that these horrors of history are shared with the region, the nation, and with so many prominent institutions does not excuse our failure to expose our own history, our own story, our own cherished heroes, to an honest accounting — to ourselves and to the watching world.”

The denomination has focused in recent years on efforts toward racial reconciliation and progress. In 2012, it elected its first African American president, Fred Luter. And in April, on the 50th anniversary of King’s death, the SBC’s public policy arm — the Ethics and Religious Liberty Commission — organized what it thought would be a small conference in Memphis about efforts to end racism. About 3,500 pastors and lay leaders showed up. “Father, Lord, would you have mercy on us sinners?” ERLC President Russell Moore prayed at the Memphis event.

There have also been notable stumbles. The group voted at its annual meeting in 2017 to condemn the movement known as the alt-right — which seeks a whites-only state — but only after it faced backlash to an earlier decision not to vote on the issue. The same year, a professor at a different Southern Baptist seminary posted to Twitter a photo appearing to show five white professors posing in hoodies and gold chains, with some pointing their fingers like guns. Barry McCarty, a professor of preaching and rhetoric at Southwestern Baptist Theological Seminary in Texas, later posted that the photo was meant to be a send-off for a professor who occasionally raps.

DURHAM, N.C. (RNS) A white scholar touring churches across the nation is trying to convince Christians that racial reconciliation is not enough — it’s time to start talking about reparations for descendants of slaves. And among mostly white, mainline Protestants this controversial — some would say unrealistic — notion is getting a hearing.

What divides the races in America, says Drake University ethicist Jennifer Harvey, is not the failure to embrace differences but the failure of white Americans to repent and repair the sins of the past. “Our differences are not only skin deep,” the 44-year-old scholar told a lecture hall packed with Duke Divinity School students recently. “Our differences are the deepest and most complex manifestations of genealogies of harm done to some and perpetrated by others.” “All over the Hebrew Bible, this is what it says to do when you steal — you give it back sevenfold,” she said.

Harvey’s 2014 book, “Dear White Christians: For Those Still Longing for Racial Reconciliation,” has led to speaking engagements at United Church of Christ gatherings, Presbyterian assemblies and college campuses such as Duke and Colgate University in New York. Over the next year, she’ll address UCC statewide meetings in the Midwest, a Lutheran congregation in Arkansas, social justice conferences in Georgia and New Mexico, college students in Michigan and Pennsylvania, and United Methodist and Disciples of Christ seminarians in New Jersey and Oklahoma. Trimble’s center has published a video interview and a book-study guide to promote Harvey’s book to its 13,000 affiliated congregations in nine different denominations.
“Jennifer is inviting a conversation that needs to be had among white people. In all of our mainline traditions, we have deeply institutionalized racism. We have to willingly give up power in order to equal the playing field.”

[Book cover: “Dear White Christians,” by Jennifer Harvey. Photo courtesy of Jennifer Harvey]

More than 20 churches in the diocese have investigated their connections to slavery and produced an online historical tour, “Trail of Souls,” as an act of truth-telling and confession. “If we’re not reconciled with our history, then we can’t understand what the repair is that’s needed,” said the Rev. Angela Shepherd, the diocesan canon for mission.

Shepherd said it’s too late for the U.S. to consider any kind of direct reimbursement but welcomed Harvey’s stoking the reparations movement in churches. She hopes Harvey’s visit, along with the Baltimore protests in the spring, will help to motivate people in her diocese to support a bill first introduced by Michigan Congressman John Conyers in 1989 to create a federal commission to study reparations. “It would not look like writing checks to individuals,” Shepherd said. “To me, it’s about figuring out a way in our country to bring up the playing field so that it is level.”

“White households are worth roughly 20 times as much as black households,” wrote Ta-Nehisi Coates. “Effectively, the black family in America is working without a safety net.” Coates traced some of the systemic injustices to “redlining,” the denial of home mortgages to black Americans, driving them toward predatory lenders outside the banking system.

Harvey said this history, beginning in slavery and Jim Crow and continuing with poor, underfunded public schools for minority children, has stalled well-intentioned efforts at reconciliation since the Rev. Martin Luther King Jr.’s assassination. This history also explains the energy around the “Black Lives Matter” response to recent acts of police brutality. “I find myself surrounded by white Americans in a state of shock,” Harvey said. “We should not be shocked or surprised. We have no right to surprise.”

Harvey said she grew up attending mostly black schools in Denver, but it wasn’t until she met black students at Union Theological Seminary that she began to understand how being white gave her societal power that they didn’t have. “Women and men of color said to me, ‘You need to figure out your whiteness,’” she said.

Harvey said demands for reparations drove white Christians out of the civil rights movement. They held onto King’s vision of the “beloved community” and kept talking about reconciliation but have never made the sort of recompense that’s needed.

With a Ph.D. in Christian social ethics from Union, Harvey has spent her career writing on white supremacy and the contemporary reparations movement. Harvey was ordained in the liberal American Baptist Churches USA. She supports Conyers’ congressional bill and is trying to kindle the conversation in religious communities.

Harvey resists specifying what form reparations might take, saying that should come from the wounded parties. She points to the National Coalition of Blacks for Reparations in America, which calls for cash, land, economic development, scholarships and policy changes ensuring equitable treatment in criminal justice, health care and financial systems. Harvey also suggests environmental reparations for Native American land taken and exploited; citizenship for underpaid immigrant workers; and political remedies for mass incarceration of black Americans.
“People who’ve been there, who lived through the civil rights movement, can look back and say, ‘Yes, our churches are just as segregated as they were before,’” said Michael DePue, director of Christian education at Chapel in the Pines, a white Presbyterian congregation in Chapel Hill, N.C., where Harvey’s book is being studied. “It’s been 40 or 50 years, and the things that the civil rights movement set out to do, they haven’t come to pass.”

“There’s an awareness among progressive Christians that if you do what you’ve always done, you’re going to get what you’ve always gotten,” she said. “The challenge that remains before us is, will it move beyond talk? What we do very well in church is talk a thing to death.”

It’s the idea that white Americans should pay a moral debt to black Americans to compensate for slavery, Jim Crow and institutionalized racism. Reparations have been debated as a concept as far back as emancipation. But for some Denver women, it’s not a debate — it’s an obligation.

In late 2018, the Denver-based nonprofit Soul2Soul Sisters received a whopping $200,000 anonymous donation. Founders Rev. Dawn Riley Duval and Rev. Tawana Davis were “stunned,” and tried to learn more. The mystery benefactor ended up being a graduate student. The donor asked Colorado Public Radio News not to use their name or identifying information in order to keep the focus on Soul2Soul and their racial justice workshops for people of faith.

She had delved deep into her family tree for a class assignment. What she found was new information that caused her “deep sadness.” She had grown up believing that her family — which settled in Mississippi in the late 18th century — had never owned slaves. But it turned out that wasn’t true. She even dug up a cassette recording of her grandmother, and she learned about Alice. Alice was an enslaved girl given to her “aristocratic” great-great grandmother when she left North Carolina for Mississippi. Even after emancipation, Alice stayed. “It became true what I had thought was true,” she said. “It may have been just one person, perhaps there were other people, but to know that my family had benefited from the efforts of someone else.”

This revelation came four years after her father passed away, leaving her an inheritance that presented a challenge. She wanted to do some good with it. The donor approached her teacher to talk out ways to use this money to atone for her family’s role in slavery and to honor Alice. Her teacher mentioned Soul2Soul, which clicked instantly — Revs. Riley Duval and Davis had not only spoken at her school, but they’d also preached at her church, and left an impression. She quietly made the donation, and figured that was that. But the reverends reached out, wanting to know more.

“I began to think, ‘What do I call this?’” she said. “A gift is something that’s yours that you give away, and I thought, ‘That’s not the right word.’ Because this, in my mind, wasn’t mine. It was something I had gotten through Alice, or partially through Alice.” She tried to find a word to pin on it, but one word, even reparations, didn’t seem like enough. “Reparations came to mind. I’ve heard that, I’m not an expert on it. But reparations to me is big, it’s societal changes, it’s something we need to do as a country,” she said. “So I thought it was more ‘personal reparations,’ and then I said it was ‘personal partial reparations,’ because I don’t know what the right number is, and I don’t know that money is all of it.
I don’t think it is.”

Riley Duval said reparations are an important part of healing racial wounds in America. “There has to be compensation. We understand economic justice and healing justice to be integral to racial justice,” she said. “So, there must be compensation towards conciliation.”

Rev. Riley Duval said the money has been a huge boon to Soul2Soul Sisters, allowing them to beef up their staff. “We have brought on other black women who are helping us to broaden the work of Soul2Soul Sisters,” she said. “Soul2Soul Sisters is a fiercely faith-based racial justice organization that is led by black women towards actualizing black healing and black liberation.”

Lotte Lieb Dula, a retired financial strategist, started down a similar path as Soul2Soul’s anonymous donor at the start of 2018. Dula’s grandmother passed away in January, and Dula took up the task of sorting through her things. She found a small, old book that was still well-preserved. Dula opened it to find inventories of slaves, hundreds of them, with their individual monetary worths listed. It was then Dula learned that much of her family’s ancestral wealth came from slavery. She did more research, and counted more than 400 enslaved people who were considered the property of her ancestors. She also unearthed an old Smith College yearbook that listed her grandmother as a KKK member.

“I want to skip the guilt and shame part, and I want to do something about this,” Dula recalled thinking at the time. She joined a national group called Coming to the Table, which connects descendants of enslaved people with descendants of slaveholders. Dula also established a scholarship fund, restricted to black applicants, for students who wish to study political science or law. She met a young black woman pursuing a career in politics, and Dula agreed to help pay off her college debt, calling it a “direct reparation.”

“Since I used to do financial modeling for my career, of course I’ve modeled what I might be able to give,” Dula laughed. “I think over the course of my lifetime, my goal is to give half a million dollars through whatever means I can, and then at my death, the rest of it will go towards setting up a reparations fund.” She’s also started building a website – a guide to reparations for white people, by white people. “This is how I’ll spend the rest of my life,” Dula said. “If only my life could be extended 250 or 400 years, maybe I’d make a small dent.”

A look at why the first and most crucial poetic gesture for a black poet in the West is a knowledge and mastery of her body

Look for me in the whirlwind

She teaches us that voodoo was used as a means, during slavery, for slaves to break free from the slave master. When the slave wanted to break free from the master, the only way to get out a lot of times was to die. That’s right, to die. And they had an antidote in voodoo that would cause the body to stop, the heart to stop, and all that, the consciousness to leave the body and move to the nervous system, and the slave master would just come to check to see that the slave was dead, check the pulse or whatever, and let the slaves bury them, but the family would be in on it, knowing that nah, he’s not dead, or she’s not dead. We’re just faking out master, so we can get him or her off the land, so that they can go get us some help so we can break free from the plantation. (Excerpt from Hollywood Forever)

This is not life! This is death disguised as life. I know what life is. Life is s p l e n d i d!
Sun Ra riffs in a gorgeous, generous poem /\ song somewhere between inheritance and transcendence. And what if he’s right. Would we worry, or zombie a little less, or invent more verbs, or honor the restlessness lurking in our performed domesticity; if we knew we were involuntary participants in a grand and oh-so-fatalistic suicide culture that began with black bodies as capital/ real estate, and fantasizes about ending there or in a carefully disguised eternal return we name entertainment, sports, mass incarceration—would we opt out or go harder? It seems the evidence is everywhere. We earn our continued oppression day by day, labor for it. What a genius mystic like Sun Ra accomplishes with his urgent call, is naming it, creating a double entendre for so what, and etching a map toward the exodus that devalues ‘progress’ as we know it, an understanding of the casual merits of an ancient future, a scoffing in the face of exclusivity, an overturning of the toxic myth that we have somehow ‘made it,’ or been civilized, when it’s clear that we’ve deteriorated as spirit beings, and physically: been degraded, degraded and diluted ourselves, become puppets for our own subjugation. And then one day we kneel for the national anthem before a football game and this is protest?, and then the next the new Drake anthem is out, started from the bottom… now we’re … at rock bottom.

But to have been seduced into acting as agents of our own devolution as a people, we had to be given words, phrases, an entire syntax and meaning factory and way of moving through space and time, in the service of that steady diminishing. Where do those words hide or how are they glorified or embedded in common action so deeply that we miss them? How did we manage to become so disembodied that we lost track of what life is? How do you expect to write effective poetry from outside of yourself, always gazing on your own spirit as ‘other’ and looking for ways to contain it with art as opposed to using this cryptic and encrypted English language to liberate the spirit?

Henrietta Lacks’s Cells

Henrietta Lacks was a cervical cancer patient at Johns Hopkins Hospital when she unwittingly became the queen mother of stem cell research. Without any informed consent, her cancer cells were cloned and survived the arduous cloning process. For years researchers had been trying to clone the cells of white men and women, and failing. It was discovered that the cells of black people are so resilient that, even when cancerous, they can survive the cloning process and replicate interminably. After her cell line, deemed the Immortal Cell Line, was successfully cloned in 1955, its traces became staples in cosmetics, household products, and all areas of medical research. In order to patent a semi-biological substance, it must derive from cloned, not original, cells, so yet another form of free labor was born in this transaction. Decades later the adulteration of Lacks’s body was acknowledged first by Morehouse College and then by additional institutions. Her family has not been paid for the secret harvesting of her cells that continues today as an open secret. Henrietta Lacks’s cells know what life is. It never bows…

Okra and Hog Calling

After having been abducted and transported across the Atlantic at manifold angles and velocities, and upon arrival on the auction block and then at the miserable slave quarters on their respective plantations, African men and women refused to eat the food they were given by their cannibal captors.
Scraps of dead animal flesh, meals of blood and starch, were bitterly, indifferently, refused. Hunger strikes are among the most natural responses humans have to trauma; they reflect the integrity of our impulse to heal. When you are sick, or made sick by circumstance, food only deepens the illness, and codes itself with the suffering one is enduring while eating, becomes about emotions more than nourishment, and negative, desperate emotions at that, the opposite of its purpose in nature. When lost, the prophets fasted, when ecstatic, the prophets fasted, when suffering, the prophets fasted; feasting was the symbolic exception. But Africans had been stolen and tortured at sea only so they could provide the free labor that would build and sustain the parasitic economies of the Americas, so letting them starve themselves was not an option for their captors.

The mentally ill plantation owners had discovered the boundaries of their persuasion, and had to return to Africa for okra and fonio, wild rice, greens, whole indigenous uncultivated foods and herbs, to keep their human capital from starving. It was only after distorting the contents of scripture, which instructs to only eat the flesh of animals, and only the animals who are herbivores themselves (“the clean animals”), in times of flood, famine, or dire need, it was only after converting slaves to Christianity and linking the religion with the West’s putrid dietary habits, that slave masters managed to create a race of primarily flesh and starch eating slaves whose thinking would often reflect those misguided tastes. Who would relate to one another using those tastes as their foundation.

Around this same time degenerative diseases sprung up in the African slaves, who were slowly becoming American in culture and psychology. And doctors purchased some slaves solely to test new surgeries and medicines on them with no thought of anesthesia or side effects. Now with the new science of epigenetics we are learning that these kinds of traumas are handed down in the genes, memories are genetic, déjà vu is no myth.

Even the quality of sunlight in North America contributes to the undermining of black bodies. It is unlike the light in West Africa, or near the equator. The UVB rays needed to melanate bodies and to produce adequate amounts of vitamin D only hit North America (with the exception of Florida, Georgia, and Southern California) between April and September, from 11:00 to 3:00 in the afternoon, and are only absorbed if you are 60 percent naked in those places at those times, and not overly calcified with dairy and other toxins. And yet with all of this working against most of us, the immortal cell line comes from a black woman who had cancer. Life is splendid.

A self-taught black man and master herbalist from Honduras cured dozens of AIDS patients in the 1980s, removing all traces of the disease in them, and most all other diseases he was asked to help heal, by first declaring them non-existent, figments of our covetous western thinking, and then weaning his patients off of all that slave food and the slave mentality that came with it. He even made it onto Worldstar, echo W O R L D S T A R! head start for half stepping, his teachings almost went mainstream. He was mysteriously arrested and killed for his work, along with more than 50 additional holistic practitioners just this year.
The history of the West is the history of everyone but black people realizing that we are its most valuable resource, that our bodies are superhuman conductors, our hearts, brilliant minds, and our language visceral and poetic even when used absentmindedly. Melanin is a technology that renders the black body masterful but can also turn on it if not acknowledged. Without adequate light and proper sounds you can turn a nation of demi-gods into demons against their own selves. The soil in the West also lacks the minerals needed to compensate for the poor quality of light and the dismal acoustics here. In that weakened condition it’s easier for a fascist police state to feed off of our anguish and even criminalize it until we all but give up on retaliation. It’s easy for police to kill black children and harvest their organs the way Henrietta Lacks’s cells have been harvested. It’s easy to turn Griots into madmen.

Just as we cannot allow ourselves to forget, or be naive enough to think that anything offered us by this society is benevolent, from the 13th Amendment, with its hideous exception, to the so-called ‘good job’ that has us inside all day degrading our light-processing engines, we also cannot ignore what ails or controls our bodies, from the new curfew renewed every time we mobilize to protest state violence, to the comfort food addiction renewed with every collective trauma, to trap beats we twerk to over Henny under dimmed fluorescent lights and Future’s intoxicating timbre, to the church pews we kneel on to worship white Jesus.

The grammar of blackness in the West must be ruthlessly examined from within, on a cellular, molecular level, and reconfigured, if we are to move from signifying the tragic and soulful beauty of an oppressed group of electric spirits, to signifying the triumph of organic power and talent over the weaker but much more evil force that has been out to contain and exploit it for centuries. The very desire to contain and oppress is a sign of internal weakness. Our poetry must become ruthless as we ruthlessly deploy our bodies to rectify our situation/ship with the Western gaze. Perhaps if we stop trying to seduce white America and its sad satellites, we will gain the courage to act worthy of ourselves and of our superconductor immortal cells and the healers who give their lives to remind us of our greatness, from musicians to herbalists to poets to mothers.

I don’t think any such maneuvers are possible unless we first understand our anatomy and physiology. Know thyself. Know how what you do with your body affects it, how what you put into your body instructs it to behave. And how those instructions become in our poems. Fetishized melancholy, willful forgetting, shadows of the show lights of the new curfew, shoals of our lost knowledge returning. Reparations begin in the body, and that is where our poems must begin; our poems must teach us new ways to use our bodies, must watch with us and walk with us and burst through us as new light, even if it hurts, even if it means we have to relearn self-love through the eyes of a truer more unified self.

Korryn Gaines, Mary J. Blige

Korryn Gaines is huddled in a pool of her own blood after having been shot by a Baltimore cop, and instructs her five-year-old son to keep filming, while Mary J. Blige limply holds Hillary Clinton’s hand to chant a hymn about how to be polite in the face of your own murder.
On 59th and Columbus, Judith Jamison is instructing an Ailey principal dancer how to cry without tears, in your torso, from the womb, in the place where mourning shifts from vengeful to detoxifying and even becomes a kind of forgiveness of self. Regenerative sorrow. We have reached a time when accidental cooning or lack of intimacy and disembodied thinking could kill us and sabotage our art; as a dancer first I’ve always known this, but as our bodies fall deeper and deeper under siege in this age of casual fascism, the only relevant poems will be those which force both ourselves and the state to contend with the full power of our forms. Throughout this month I’ll discuss poems and poetic acts that reinstall spirit language and re-embody the act of writing and of thinking in that fashion, accessing the brave vulnerability that is the key to self-mastery here.

Until now we have survived by denying our physical otherness or asserting it so aggressively it becomes parody; we have tried to fit into a context whose first intention is to extort us in every way possible; we have praised invisibility as a skill and perhaps felt guilty for being great on our own terms, gone deaf to those terms and yearnings. In the process of all of this pandering, we have often used language as a tool to access the few token spaces for us to supposedly ‘prosper’ in the western world; rather than training the language to obey us, we have proven that we can be tamed and undermined by it. Our bodies, and how we use them, are testaments to how we use language both on and off the page.

Today’s poems need to break up rigged thought patterns rather than look for new ways to restate and validate them, and today’s black poets, and all who love us, must master our bodies and their histories the way soldiers do, for our words must be directed toward saving the souls these vessels carry from suffering the fate of monopoly capitalism into fascism and back into barbarism; that’s the trajectory we are on if we remain mere witnesses, if we remain abstract to ourselves. Poetry is the space wherein the facts we learn through true study and understanding of self can perform as archetypes and symbols and syncopation, so that these hard facts are easier to bear, but it is not a space we should use to escape the facts of our essence or our condition. If you ignore what happens to your body, what is happening to black bodies everywhere, your poems will ignore you back and lack the resonance we need from them to free ourselves or become our true selves again. But how do we remain that present without putting our bodies in danger or under scrutiny in order to reclaim their richest language?

Julia Leakes yearned to be reunited with her family. In 1853, her two sisters showed up for sale along with her thirteen nieces and nephews in Lawrence County, Mississippi. Julia used all the political capital an enslaved woman could muster to negotiate the sale of her loved ones to her owner, Stephen A. Douglas. Douglas’s semi-literate white plantation manager told him “[y]our negros begs for you to b[u]y them.” Despite assurances that this would “be a good arrangement,” Douglas refused to shuffle any of his 140+ slaves to reunite this separated slave family. Instead, Julia’s siblings, nieces, and nephews were put on the auction block, where they vanished from the historical record.[1]

Unfortunately, things went from bad to worse for Julia.
By 1859, she had a 1 in 3 chance of being worked to death under Douglas’s new overseer in Washington County, Mississippi. Douglas’s mistreatment of his slaves became notorious. According to one report, slaves on the Douglas plantation were kept “not half fed and clothed.”[2] In another, Dr. Dan Brainard from Rush Medical College stated that Douglas’s slaves were subjected to “inhuman and disgraceful treatment” deemed so abhorrent that even other slaveholders in Mississippi branded Douglas “a disgrace to all slave-holders and the system that they support.”[3]

When we began this project, we assumed that the University of Chicago was a postemancipation institution. However, as the University of Chicago historian and Dean of its College John Boyer has shown, the deep ties between the university’s original Bronzeville campus and its current Hyde Park campus constitute a rich “inheritance” and give the university what he calls “a plausible genealogy as a pre-Civil War institution.” Continuities between the two campuses can be found almost everywhere: among its trustees, faculty members, student alumni, donor networks, intellectual culture, institutional memory, distinctive architecture, library books, and, of course, the University of Chicago’s name itself. The two campuses would undoubtedly be deemed inseparable alter egos of one another. Boyer convincingly makes this case. What he seems to have missed, however, was that this pre-Civil War founding also came with a founding slaveholder who endures to this day—haunting the halls of the Hutchinson Commons.

Due in no small part to the pioneering work of the Brown University Committee on Slavery and Justice and historian Craig Steven Wilder, we now know that the University of Chicago is not alone. Many elite colleges and universities have deep roots in American slavery. Many also owe their large endowments to the financial legacy of the slave economy. These schools continue to leverage these endowments to develop and recruit talented faculty and students, build up the physical plant, and maintain their global reputations in the marketplace of ideas. Once a school comes to grips with its historical ties to chattel slavery, however, what is its next step?

Many may argue that Georgetown University might provide a useful but incomplete starting point for the University of Chicago. Both schools are located in urban environments with a large African American population. Both are endowed with lots of money. Under the aegis of its Working Group on Slavery, Memory, and Reconciliation, Georgetown has also publicly wrestled with the question of what it owes the descendants of the enslaved. Georgetown has decided that there is no statute of limitations on slavery, and that to reckon with the past it had to engage historically. The university opened up four tenure-track lines in African American Studies, expanded the African American Studies major, and has plans to establish a Research Center for Racial Justice. In addition to these measures, the university will offer the descendants of enslaved African Americans sold by its Jesuits preferential treatment in admissions (similar to the boost that so-called legacy students already receive). It will also rename two campus buildings in honor of African Americans—one an educator and another one of the enslaved who made the university a financial possibility. But is this enough?
Perhaps, instead, the University of Chicago can find a way to look beyond Georgetown and what many have rightly criticized as its self-congratulatory, self-serving, and extremely limited program. Given the University of Chicago’s location on the city’s South Side, it is uniquely situated to engage and address the legacy of slavery in a much different way. Chicago, like Washington, D.C., has long been one of the meccas of Black life and culture. Establishing an African American Studies department should be a no-brainer. So, too, should be a concerted effort to recruit and develop faculty of color while vigorously recruiting and mentoring underrepresented students to attend the university. But this should happen anyway. It’s not reparations.

Maybe a further step would be to encourage the University to build more deeply upon the community-based efforts it is already engaging in. These include the UChicago Promise program, which provides enrichment programs for talented but under-served public school students. There is also the Chicago Public Schools Educators Award Scholarship—a full scholarship to attend the University for the children of educators in the Chicago Public Schools—which should be broadly promoted and expanded. The University’s Arts + Public Life programming, including the Washington Park Arts Block, the Black Cinema House, and the Stony Island Arts Bank (led by the indefatigable Theaster Gates), should all continue to invite local residents to engage in their programming. But, again, this is already happening as it should.

Perhaps we’ve gotten reparations entirely backwards. Here we must return to Julia and the enslaved peoples of the Douglas plantation. Any program of reparations must begin with them and their descendants. Reparations that flow back to the university itself, either in the form of goodwill or an improved campus experience, are not reparations. Diversity initiatives, black studies programs, and slick PR campaigns celebrating the university’s benevolence function primarily to enrich the university while compelling black students and faculty members to labor once more for the institution that owes their ancestors money. This cannot be a question of what the university will do for black communities. It must be a function of what black communities demand as payment to forgive an unforgivable debt. Black people do not need a seat at the university’s reparations table. They need to own that table and have full control over how reparations are structured. As more details of the university’s participation in slavery, Jim Crow, and discrimination post-1967 are documented, the current residents and community organizations of the South Side of Chicago must lead the way—not be told where to sit.

This is part of a requisite cognitive shift that involves thinking beyond the legal framework of ‘damages,’ or the neoliberal ordering of private property rights, or the monstrosity of capitalism. We must imagine an entirely new model of human interactions, self-governance, and social organization. One that shuns hierarchies and fosters horizontalism. If done correctly, reparations can lead the way to a fresh re-conceptualization of politics—not based in crude self-interest but in justice and even love. Reparations promise us a monumental re-birthing of America. Like most births, this one will be painful. But the practice of reparations must continue until the world that slavery built is rolled up and a new order spread out in its place.
Until then, the University of Chicago must begin all of its conversations with the knowledge that it is party to a horrific crime that can never be fully rectified. But still it must try. And through that trying it must embrace an entirely new mission—one that centers slavery, the lives of the enslaved, and their descendants.

This piece was originally posted at the Black Perspectives blog, published by the African American Intellectual History Society (AAIHS).

A step-by-step guide to paying the descendants of enslaved Africans.

Let’s say you’re driving down the street and someone rear-ends you. You get out of your car to assess the damage. The person who hit your vehicle gets out of his car, apologizes for the damage and calls his insurance company. Eventually, you receive a check for the harm done. Now, let’s say that for years, if not generations, your family and families like yours have been damaged by your country’s political and economic system — by law and widespread practice, with the intent of benefiting families not like yours — then the checks for the harm done would be called reparations.

Beginning with more than two centuries of slavery, black Americans have been deliberately abused by their own nation. It’s time to pay restitution.

Black activists and intellectuals have been making that point with increasing volume over the last few years, turning what was an obscure thought problem into a political issue. The question of reparations has even entered into the Democratic primary, with Sen. Bernie Sanders (I-Vt.) struggling to explain to black voters why he has built such a strong social justice platform on every issue but this one. Sanders was put on the spot last month when a reporter asked him if he would support reparations as president. “No, I don’t think so,” he said, describing the likelihood of congressional passage as “nil” — as if those odds normally stopped him.

Every year since 1989, Rep. John Conyers (D-Mich.) has introduced the Commission to Study Reparation Proposals for African-Americans Act. As the name indicates, H.R. 40 does not require reparations. It simply calls for comprehensive research into the nature and financial impact of African enslavement as well as the ills inflicted on black people during the Jim Crow era. Then, remedies can be suggested.

Fifty-nine percent of black Americans think that the descendants of enslaved Africans deserve reparations, according to a June 2014 HuffPost/YouGov poll. Sixty-three percent of black folks support targeted education and job training programs for the descendants of slaves. Most other Americans still aren’t listening.

Ta-Nehisi Coates, perhaps the most prominent voice now pushing reparations, laid out why black Americans deserve even more than repayment for slavery in a sweeping 2014 article, “The Case for Reparations.” The exploitation didn’t stop with the Emancipation Proclamation, so any restitution must reckon with the discrimination that followed and deal with the living victims of these ills.
If not even an avowed socialist can be bothered to grapple with reparations, if the question really is that far beyond the pale, if Bernie Sanders truly believes that victims of the Tulsa pogrom deserved nothing, that the victims of contract lending deserve nothing, that the victims of debt peonage deserve nothing, that the political plunder of black communities entitles them to nothing, if this is the candidate of the radical left — then expect white supremacy in America to endure well beyond our lifetimes and the lifetimes of our children.

Let’s change that — let’s bother to have the hard but necessary discussion of what black Americans are owed for what was taken from them. If reparations ever come, what would they look like?

1. Let’s Figure Out Who Deserves Reparations And Why

Simply put, reparations are due to the millions of black Americans whose families have endured generations of discrimination in the United States. Most black Americans count among their ancestors people who endured chattel slavery, the ultimate denial of an individual’s humanity.

William Darity, a public policy professor at Duke University who has studied reparations extensively, proposes two specific requirements for eligibility to receive a payout. First, at least 10 years before the onset of a reparations program, an individual must have self-identified on a census form or other formal document as black, African-American, colored or Negro. Second, each individual must provide proof of an ancestor who was enslaved in the U.S.

Why does this huge group of Americans deserve restitution? Because starting with slavery, the damage done was institutionalized and inescapable. Darity has created a “Bill of Particulars,” including such specific grievances as:

- The extended history of government-sanctioned segregation and other forms of racial oppression in the Jim Crow era
- Post-WWII public policies that were designed to provide upward mobility for Americans but in practice did not include black people (such as the GI Bill)
- Redlining, which made home ownership a possibility for white people while shutting out black folks
- Ongoing discrimination against and associated denigration of black lives

Eric J. Miller, a professor at Loyola Law School, said the case for reparations starts with an honest accounting of the racism that black people have experienced. “Part of our history is our grandparents participating in these acts of terrible violence [against black people],” he said. “But people don’t want to acknowledge the horror of what they engaged in.” White America built its wealth on those generations of legal and physical violence — a fact most white people today would rather not dwell on. “People don’t want to believe that they got their gains in an ill manner,” Miller said. “The cognitive dissonance of learning that your property is got and preserved on the back of the misery of others is not an incredibly nice thing to live with. So people would rather discount it.”

But when the harm is great enough, it’s not enough to say you’re sorry and try to fix problems going forward. Germany made an effort to repay the Jews for the horrors of the Holocaust. Japanese-Americans were repaid for suffering in internment camps. Black Americans deserve no less. This leads us to our next step.

2. So How Much Are We Talking About, Exactly?

No one really knows. (That’s part of the reason Rep. Conyers wants a commission.) But there are some numbers out there.
A 1990 study by Richard Sutch and Roger Ransom, professors at the University of California, Riverside, estimated that industries fueled by slave labor, like cotton and tobacco, made profits of $3.4 billion (in 1983 dollars) between 1806 and 1860. Darity has estimated that if you throw in an annual interest rate of 5 percent, that number jumps to $9.12 billion (in 2008 dollars). Larry Neal, an economist at the University of Illinois, came up with an even higher number. His studies concluded that $1.4 trillion (in 1983 dollars) was owed to the descendants of enslaved Africans based on the compensation their ancestors did not receive for their labor between 1620 and 1840. With interest, that amounts to $6.4 trillion in 2014, according to The New Republic.

None of these numbers account for the physical and sexual violence inflicted upon enslaved Africans. The figures mentioned also don’t include compensation for housing segregation and other forms of racial discrimination in the years since slavery ended. Nor do they factor in the extent to which American industries have profited — and continue to profit — from exploiting low-income workers, many of whom are black.

How do we measure those kinds of losses — the chance at upward economic mobility that was stolen from millions? One way is to compare property values between majority-black neighborhoods that were redlined and white neighborhoods that were not — or property values within a single neighborhood before and after redlining. Another way is to gauge lost educational opportunities. Good public schools are usually found in majority-white suburbs where people pay higher property taxes. Poorly performing schools are found more often in economically disenfranchised areas with larger black populations.

Bottom line: reparations are going to cost a lot of money. But America is a wealthy nation that can afford to pay for its misdeeds. For perspective, consider that in fiscal year 2014, the U.S. government spent $3.5 trillion, which is only 20 percent of the nation’s gross domestic product of about $17.5 trillion.

3. Now, How Would This Money Be Paid Out?

Darity suggests that financial payouts be divided between individual recipients and a variety of endowments set up to develop the economic strength of the black community. His model is inspired by Germany’s restitution payments both to victims of the Holocaust and to Israel. The advantage of individual payouts, Miller notes, is that they maximize autonomy. But much of that money would land back in the white-dominated economy and “the one percent would become one percentier,” he said. Hence the value of using a portion of reparation funds to create programs geared toward aiding black people in combating the damage of racism. “One could think of Black America as being a community that could benefit from development investments,” Darity said. “So you could have a trust fund that was set up to finance higher education, [another] to create greater opportunities for opening one’s own business, and so forth.”

Darity envisions the U.S. government establishing and overseeing these programs. Although it might seem counter-intuitive to give this power to the very institution that committed so much discrimination against black people, the professor said the government should be heavily involved precisely because of that history.
“The U.S. government is the responsible party because of the entire legal apparatus that supported both slavery and, subsequently, Jim Crow and continues to permit ongoing discrimination,” he said. Miller emphasizes that the reparations-funded programs must be fully accessible to and controlled by members of the black community. “Unless institutions exist that are controlled by and accountable to the community, then the community will always be dominated, or prone to domination, by others,” he said.

4. But Will This Ever Happen?

Congress hasn’t even managed to pass H.R. 40. And that’s really no surprise since most Americans are not pushing their lawmakers to do anything on this issue. Only 6 percent of white Americans support cash payments to the descendants of enslaved Africans, according to that HuffPost/YouGov poll. Only 19 percent favor reparations in the form of education and jobs programs, while 50 percent of whites don’t even believe that slavery is one of the reasons why black Americans have lower levels of wealth. They’re wrong.

“The connection between slavery and the pillars of American society are tight. There are no pillars of American society without slavery,” Miller said. “You might think about that even literally. The columns of the White House and the Congress were built by slave labor.”

To deflect discussing why reparations are needed, some people request a developed strategy for reparations or a detailed legislative proposal before they’ll contemplate the issue. The suggestion, in itself, fits into a tired line of thinking that victims of injustice must explain themselves fully — and convincingly — to the system that harmed them before any recognition is provided. “These demands always struck me as akin to demanding a payment plan for something one has neither decided one needs nor is willing to purchase,” Coates wrote. As he has tirelessly reiterated, we must start with a robust discussion on why reparations are owed to black Americans. If anything, the expansive U.S. history of anti-black racism is the deterrent — but letting that deter us today is itself anti-black.

This returns us to the criticism of Sanders. The symbolism of specifically calling for reparations matters. A white presidential candidate who vows only to fight police violence and other modern ills affecting black Americans is essentially urging that we put a bandage on past injustices without true reconciliation. If we don’t look back and reckon with what has been done, there is no moving forward.

Two hundred fifty years of slavery. Ninety years of Jim Crow. Sixty years of separate but equal. Thirty-five years of racist housing policy. Until we reckon with our compounding moral debts, America will never be whole.

And if thy brother, a Hebrew man, or a Hebrew woman, be sold unto thee, and serve thee six years; then in the seventh year thou shalt let him go free from thee. And when thou sendest him out free from thee, thou shalt not let him go away empty: thou shalt furnish him liberally out of thy flock, and out of thy floor, and out of thy winepress: of that wherewith the LORD thy God hath blessed thee thou shalt give unto him. And thou shalt remember that thou wast a bondman in the land of Egypt, and the LORD thy God redeemed thee: therefore I command thee this thing today.
— Deuteronomy 15:12–15

Besides the crime which consists in violating the law, and varying from the right rule of reason, whereby a man so far becomes degenerate, and declares himself to quit the principles of human nature, and to be a noxious creature, there is commonly injury done to some person or other, and some other man receives damage by his transgression: in which case he who hath received any damage, has, besides the right of punishment common to him with other men, a particular right to seek reparation.

— John Locke, “Second Treatise”

By our unpaid labor and suffering, we have earned the right to the soil, many times over and over, and now we are determined to have it.

— Anonymous, 1861

I. “So That’s Just One Of My Losses”

Clyde Ross was born in 1923, the seventh of 13 children, near Clarksdale, Mississippi, the home of the blues. Ross’s parents owned and farmed a 40-acre tract of land, flush with cows, hogs, and mules. Ross’s mother would drive to Clarksdale to do her shopping in a horse and buggy, in which she invested all the pride one might place in a Cadillac. The family owned another horse, with a red coat, which they gave to Clyde. The Ross family wanted for little, save that which all black families in the Deep South then desperately desired—the protection of the law.

In the 1920s, Jim Crow Mississippi was, in all facets of society, a kleptocracy. The majority of the people in the state were perpetually robbed of the vote—a hijacking engineered through the trickery of the poll tax and the muscle of the lynch mob. Between 1882 and 1968, more black people were lynched in Mississippi than in any other state. “You and I know what’s the best way to keep the nigger from voting,” blustered Theodore Bilbo, a Mississippi senator and a proud Klansman. “You do it the night before the election.”

The state’s regime partnered robbery of the franchise with robbery of the purse. Many of Mississippi’s black farmers lived in debt peonage, under the sway of cotton kings who were at once their landlords, their employers, and their primary merchants. Tools and necessities were advanced against the return on the crop, which was determined by the employer. When farmers were deemed to be in debt—and they often were—the negative balance was then carried over to the next season. A man or woman who protested this arrangement did so at the risk of grave injury or death. Refusing to work meant arrest under vagrancy laws and forced labor under the state’s penal system. Well into the 20th century, black people spoke of their flight from Mississippi in much the same manner as their runagate ancestors had. In her 2010 book, The Warmth of Other Suns, Isabel Wilkerson tells the story of Eddie Earvin, a spinach picker who fled Mississippi in 1963, after being made to work at gunpoint. “You didn’t talk about it or tell nobody,” Earvin said. “You had to sneak away.”

When Clyde Ross was still a child, Mississippi authorities claimed his father owed $3,000 in back taxes. The elder Ross could not read. He did not have a lawyer. He did not know anyone at the local courthouse. He could not expect the police to be impartial. Effectively, the Ross family had no way to contest the claim and no protection under the law. The authorities seized the land. They seized the buggy. They took the cows, hogs, and mules. And so for the upkeep of separate but equal, the entire Ross family was reduced to sharecropping. This was hardly unusual.
In 2001, the Associated Press published a three-part investigation into the theft of black-owned land stretching back to the antebellum period. The series documented some 406 victims and 24,000 acres of land valued at tens of millions of dollars. The land was taken through means ranging from legal chicanery to terrorism. “Some of the land taken from black families has become a country club in Virginia,” the AP reported, as well as “oil fields in Mississippi” and “a baseball spring training facility in Florida.” Clyde Ross was a smart child. His teacher thought he should attend a more challenging school. There was very little support for educating black people in Mississippi. But Julius Rosenwald, a part owner of Sears, Roebuck, had begun an ambitious effort to build schools for black children throughout the South. Ross’s teacher believed he should attend the local Rosenwald school. It was too far for Ross to walk and get back in time to work in the fields. Local white children had a school bus. Clyde Ross did not, and thus lost the chance to better his education. Then, when Ross was 10 years old, a group of white men demanded his only childhood possession—the horse with the red coat. “You can’t have this horse. We want it,” one of the white men said. They gave Ross’s father $17. “I did everything for that horse,” Ross told me. “Everything. And they took him. Put him on the racetrack. I never did know what happened to him after that, but I know they didn’t bring him back. So that’s just one of my losses.” The losses mounted. As sharecroppers, the Ross family saw their wages treated as the landlord’s slush fund. Landowners were supposed to split the profits from the cotton fields with sharecroppers. But bales would often disappear during the count, or the split might be altered on a whim. If cotton was selling for 50 cents a pound, the Ross family might get 15 cents, or only five. One year Ross’s mother promised to buy him a $7 suit for a summer program at their church. She ordered the suit by mail. But that year Ross’s family was paid only five cents a pound for cotton. The mailman arrived with the suit. The Rosses could not pay. The suit was sent back. Clyde Ross did not go to the church program. It was in these early years that Ross began to understand himself as an American—he did not live under the blind decree of justice, but under the heel of a regime that elevated armed robbery to a governing principle. He thought about fighting. “Just be quiet,” his father told him. “Because they’ll come and kill us all.” Clyde Ross grew. He was drafted into the Army. The draft officials offered him an exemption if he stayed home and worked. He preferred to take his chances with war. He was stationed in California. He found that he could go into stores without being bothered. He could walk the streets without being harassed. He could go into a restaurant and receive service. Ross was shipped off to Guam. He fought in World War II to save the world from tyranny. But when he returned to Clarksdale, he found that tyranny had followed him home. This was 1947, eight years before Mississippi lynched Emmett Till and tossed his broken body into the Tallahatchie River. The Great Migration, a mass exodus of 6 million African Americans that spanned most of the 20th century, was now in its second wave. The black pilgrims did not journey north simply seeking better wages and work, or bright lights and big adventures. They were fleeing the acquisitive warlords of the South.
They were seeking the protection of the law. Clyde Ross was among them. He came to Chicago in 1947 and took a job as a taster at Campbell’s Soup. He made a stable wage. He married. He had children. His paycheck was his own. No Klansmen stripped him of the vote. When he walked down the street, he did not have to move because a white man was walking past. He did not have to take off his hat or avert his gaze. His journey from peonage to full citizenship seemed near-complete. Only one item was missing—a home, that final badge of entry into the sacred order of the American middle class of the Eisenhower years. In 1961, Ross and his wife bought a house in North Lawndale, a bustling community on Chicago’s West Side. North Lawndale had long been a predominantly Jewish neighborhood, but a handful of middle-class African Americans had lived there starting in the ’40s. The community was anchored by the sprawling Sears, Roebuck headquarters. North Lawndale’s Jewish People’s Institute actively encouraged blacks to move into the neighborhood, seeking to make it a “pilot community for interracial living.” In the battle for integration then being fought around the country, North Lawndale seemed to offer promising terrain. But out in the tall grass, highwaymen, nefarious as any Clarksdale kleptocrat, were lying in wait. Three months after Clyde Ross moved into his house, the boiler blew out. This would normally be a homeowner’s responsibility, but in fact, Ross was not really a homeowner. His payments were made to the seller, not the bank. And Ross had not signed a normal mortgage. He’d bought “on contract”: a predatory agreement that combined all the responsibilities of homeownership with all the disadvantages of renting—while offering the benefits of neither. Ross had bought his house for $27,500. The seller, not the previous homeowner but a new kind of middleman, had bought it for only $12,000 six months before selling it to Ross. In a contract sale, the seller kept the deed until the contract was paid in full—and, unlike with a normal mortgage, Ross would acquire no equity in the meantime. If he missed a single payment, he would immediately forfeit his $1,000 down payment, all his monthly payments, and the property itself. The men who peddled contracts in North Lawndale would sell homes at inflated prices and then evict families who could not pay—taking their down payment and their monthly installments as profit. Then they’d bring in another black family, rinse, and repeat. “He loads them up with payments they can’t meet,” an office secretary told The Chicago Daily News of her boss, the speculator Lou Fushanis, in 1963. “Then he takes the property away from them. He’s sold some of the buildings three or four times.” Ross had tried to get a legitimate mortgage in another neighborhood, but was told by a loan officer that there was no financing available. The truth was that there was no financing for people like Clyde Ross. From the 1930s through the 1960s, black people across the country were largely cut out of the legitimate home-mortgage market through means both legal and extralegal. Chicago whites employed every measure, from “restrictive covenants” to bombings, to keep their neighborhoods segregated. Their efforts were buttressed by the federal government. In 1934, Congress created the Federal Housing Administration. The FHA insured private mortgages, causing a drop in interest rates and a decline in the size of the down payment required to buy a house.
But an insured mortgage was not a possibility for Clyde Ross. The FHA had adopted a system of maps that rated neighborhoods according to their perceived stability. On the maps, green areas, rated “A,” indicated “in demand” neighborhoods that, as one appraiser put it, lacked “a single foreigner or Negro.” These neighborhoods were considered excellent prospects for insurance. Neighborhoods where black people lived were rated “D” and were usually considered ineligible for FHA backing. They were colored in red. Neither the percentage of black people living there nor their social class mattered. Black people were viewed as a contagion. Redlining went beyond FHA-backed loans and spread to the entire mortgage industry, which was already rife with racism, excluding black people from most legitimate means of obtaining a mortgage. “A government offering such bounty to builders and lenders could have required compliance with a nondiscrimination policy,” Charles Abrams, the urban-studies expert who helped create the New York City Housing Authority, wrote in 1955. “Instead, the FHA adopted a racial policy that could well have been culled from the Nuremberg laws.” The devastating effects are cogently outlined by Melvin L. Oliver and Thomas M. Shapiro in their 1995 book, Black Wealth/White Wealth: Locked out of the greatest mass-based opportunity for wealth accumulation in American history, African Americans who desired and were able to afford home ownership found themselves consigned to central-city communities where their investments were affected by the “self-fulfilling prophecies” of the FHA appraisers: cut off from sources of new investment[,] their homes and communities deteriorated and lost value in comparison to those homes and communities that FHA appraisers deemed desirable. In Chicago and across the country, whites looking to achieve the American dream could rely on a legitimate credit system backed by the government. Blacks were herded into the sights of unscrupulous lenders who took them for money and for sport. “It was like people who like to go out and shoot lions in Africa. It was the same thrill,” a housing attorney told the historian Beryl Satter in her 2009 book, Family Properties. “The thrill of the chase and the kill.” The kill was profitable. At the time of his death, Lou Fushanis owned more than 600 properties, many of them in North Lawndale, and his estate was estimated to be worth $3 million. He’d made much of this money by exploiting the frustrated hopes of black migrants like Clyde Ross. During this period, according to one estimate, 85 percent of all black home buyers who bought in Chicago bought on contract. “If anybody who is well established in this business in Chicago doesn’t earn $100,000 a year,” a contract seller told The Saturday Evening Post in 1962, “he is loafing.” Contract sellers became rich. North Lawndale became a ghetto. Clyde Ross still lives there. He still owns his home. He is 91, and the emblems of survival are all around him—awards for service in his community, pictures of his children in cap and gown. But when I asked him about his home in North Lawndale, I heard only anarchy. “We were ashamed. We did not want anyone to know that we were that ignorant,” Ross told me. He was sitting at his dining-room table. His glasses were as thick as his Clarksdale drawl. “I’d come out of Mississippi where there was one mess, and come up here and got in another mess. So how dumb am I? I didn’t want anyone to know how dumb I was.
“When I found myself caught up in it, I said, ‘How? I just left this mess. I just left no laws. And no regard. And then I come here and get cheated wide open.’ I would probably want to do some harm to some people, you know, if I had been violent like some of us. I thought, ‘Man, I got caught up in this stuff. I can’t even take care of my kids.’ I didn’t have enough for my kids. You could fall through the cracks easy fighting these white people. And no law.” But fight Clyde Ross did. In 1968 he joined the newly formed Contract Buyers League—a collection of black homeowners on Chicago’s South and West Sides, all of whom had been locked into the same system of predation. There was Howell Collins, whose contract called for him to pay $25,500 for a house that a speculator had bought for $14,500. There was Ruth Wells, who’d managed to pay out half her contract, expecting a mortgage, only to suddenly see an insurance bill materialize out of thin air—a requirement the seller had added without Wells’s knowledge. Contract sellers used every tool at their disposal to pilfer from their clients. They scared white residents into selling low. They lied about properties’ compliance with building codes, then left the buyer responsible when city inspectors arrived. They presented themselves as real-estate brokers, when in fact they were the owners. They guided their clients to lawyers who were in on the scheme. The Contract Buyers League fought back. Members—who would eventually number more than 500—went out to the posh suburbs where the speculators lived and embarrassed them by knocking on their neighbors’ doors and informing them of the details of the contract-lending trade. They refused to pay their installments, instead holding monthly payments in an escrow account. Then they brought a suit against the contract sellers, accusing them of buying properties and reselling in such a manner “to reap from members of the Negro race large and unjust profits.” In return for the “deprivations of their rights and privileges under the Thirteenth and Fourteenth Amendments,” the league demanded “prayers for relief”—payback of all moneys paid on contracts and all moneys paid for structural improvement of properties, at 6 percent interest minus a “fair, non-discriminatory” rental price for time of occupation. Moreover, the league asked the court to adjudge that the defendants had “acted willfully and maliciously and that malice is the gist of this action.” Ross and the Contract Buyers League were no longer appealing to the government simply for equality. They were no longer fleeing in hopes of a better deal elsewhere. They were charging society with a crime against their community. They wanted the crime publicly ruled as such. They wanted the crime’s executors declared to be offensive to society. And they wanted restitution for the great injury brought upon them by said offenders. In 1968, Clyde Ross and the Contract Buyers League were no longer simply seeking the protection of the law. They were seeking reparations.
II. “A Difference of Kind, Not Degree”
According to the most-recent statistics, North Lawndale is now on the wrong end of virtually every socioeconomic indicator. In 1930 its population was 112,000. Today it is 36,000. The halcyon talk of “interracial living” is dead. The neighborhood is 92 percent black. Its homicide rate is 45 per 100,000—triple the rate of the city as a whole. The infant-mortality rate is 14 per 1,000—more than twice the national average.
Forty-three percent of the people in North Lawndale live below the poverty line—double Chicago’s overall rate. Forty-five percent of all households are on food stamps—nearly three times the rate of the city at large. Sears, Roebuck left the neighborhood in 1987, taking 1,800 jobs with it. Kids in North Lawndale need not be confused about their prospects: Cook County’s Juvenile Temporary Detention Center sits directly adjacent to the neighborhood. North Lawndale is an extreme portrait of the trends that ail black Chicago. Such is the magnitude of these ailments that it can be said that blacks and whites do not inhabit the same city. The average per capita income of Chicago’s white neighborhoods is almost three times that of its black neighborhoods. When the Harvard sociologist Robert J. Sampson examined incarceration rates in Chicago in his 2012 book, Great American City, he found that a black neighborhood with one of the highest incarceration rates (West Garfield Park) had a rate more than 40 times as high as the white neighborhood with the highest rate (Clearing). “This is a staggering differential, even for community-level comparisons,” Sampson writes. “A difference of kind, not degree.” In other words, Chicago’s impoverished black neighborhoods—characterized by high unemployment and households headed by single parents—are not simply poor; they are “ecologically distinct.” This “is not simply the same thing as low economic status,” writes Sampson. “In this pattern Chicago is not alone.” The lives of black Americans are better than they were half a century ago. The humiliation of Whites Only signs is gone. Rates of black poverty have decreased. Black teen-pregnancy rates are at record lows—and the gap between black and white teen-pregnancy rates has shrunk significantly. But such progress rests on a shaky foundation, and fault lines are everywhere. The income gap between black and white households is roughly the same today as it was in 1970. Patrick Sharkey, a sociologist at New York University, studied children born from 1955 through 1970 and found that 4 percent of whites and 62 percent of blacks across America had been raised in poor neighborhoods. A generation later, the same study showed, virtually nothing had changed. And whereas whites born into affluent neighborhoods tended to remain in affluent neighborhoods, blacks tended to fall out of them. This is not surprising. Black families, regardless of income, are significantly less wealthy than white families. The Pew Research Center estimates that white households are worth roughly 20 times as much as black households, and that whereas only 15 percent of whites have zero or negative wealth, more than a third of blacks do. Effectively, the black family in America is working without a safety net. When financial calamity strikes—a medical emergency, divorce, job loss—the fall is precipitous. And just as black families of all incomes remain handicapped by a lack of wealth, so too do they remain handicapped by their restricted choice of neighborhood. Black people with upper-middle-class incomes do not generally live in upper-middle-class neighborhoods. Sharkey’s research shows that black families making $100,000 typically live in the kinds of neighborhoods inhabited by white families making $30,000.
“Blacks and whites inhabit such different neighborhoods,” Sharkey writes, “that it is not possible to compare the economic outcomes of black and white children.” Even seeming evidence of progress withers under harsh light. In 2012, the Manhattan Institute cheerily noted that segregation had declined since the 1960s. And yet African Americans still remained—by far—the most segregated ethnic group in the country. With segregation, with the isolation of the injured and the robbed, comes the concentration of disadvantage. An unsegregated America might see poverty, and all its effects, spread across the country with no particular bias toward skin color. Instead, the concentration of poverty has been paired with a concentration of melanin. The resulting conflagration has been devastating. One thread of thinking in the African American community holds that these depressing numbers partially stem from cultural pathologies that can be altered through individual grit and exceptionally good behavior. (In 2011, Philadelphia Mayor Michael Nutter, responding to violence among young black males, put the blame on the family: “Too many men making too many babies they don’t want to take care of, and then we end up dealing with your children.” Nutter turned to those presumably fatherless babies: “Pull your pants up and buy a belt, because no one wants to see your underwear or the crack of your butt.”) The thread is as old as black politics itself. It is also wrong. The kind of trenchant racism to which black people have persistently been subjected can never be defeated by making its victims more respectable. The essence of American racism is disrespect. And in the wake of the grim numbers, we see the grim inheritance. The Contract Buyers League’s suit brought by Clyde Ross and his allies took direct aim at this inheritance. The suit was rooted in Chicago’s long history of segregation, which had created two housing markets—one legitimate and backed by the government, the other lawless and patrolled by predators. The suit dragged on until 1976, when the league lost a jury trial. Securing the equal protection of the law proved hard; securing reparations proved impossible. If there were any doubts about the mood of the jury, the foreman removed them by saying, when asked about the verdict, that he hoped it would help end “the mess Earl Warren made with Brown v. Board of Education and all that nonsense.” The Supreme Court seems to share that sentiment. The past two decades have witnessed a rollback of the progressive legislation of the 1960s. Liberals have found themselves on the defensive. In 2008, when Barack Obama was a candidate for president, he was asked whether his daughters—Malia and Sasha—should benefit from affirmative action. He answered in the negative. The exchange rested upon an erroneous comparison of the average American white family and the exceptional first family. In the contest of upward mobility, Barack and Michelle Obama have won. But they’ve won by being twice as good—and enduring twice as much. Malia and Sasha Obama enjoy privileges beyond the average white child’s dreams. But that comparison is incomplete. The more telling question is how they compare with Jenna and Barbara Bush—the products of many generations of privilege, not just one. Whatever the Obama children achieve, it will be evidence of their family’s singular perseverance, not of broad equality.
III. “We Inherit Our Ample Patrimony”
In 1783, the freedwoman Belinda Royall petitioned the commonwealth of Massachusetts for reparations. Belinda had been born in modern-day Ghana. She was kidnapped as a child and sold into slavery. She endured the Middle Passage and 50 years of enslavement at the hands of Isaac Royall and his son. But the junior Royall, a British loyalist, fled the country during the Revolution. Belinda, now free after half a century of labor, beseeched the nascent Massachusetts legislature: The face of your Petitioner, is now marked with the furrows of time, and her frame bending under the oppression of years, while she, by the Laws of the Land, is denied the employment of one morsel of that immense wealth, apart whereof hath been accumilated by her own industry, and the whole augmented by her servitude. WHEREFORE, casting herself at your feet if your honours, as to a body of men, formed for the extirpation of vassalage, for the reward of Virtue, and the just return of honest industry—she prays, that such allowance may be made her out of the Estate of Colonel Royall, as will prevent her, and her more infirm daughter, from misery in the greatest extreme, and scatter comfort over the short and downward path of their lives. Belinda Royall was granted a pension of 15 pounds and 12 shillings, to be paid out of the estate of Isaac Royall—one of the earliest successful attempts to petition for reparations. At the time, black people in America had endured more than 150 years of enslavement, and the idea that they might be owed something in return was, if not the national consensus, at least not outrageous. “A heavy account lies against us as a civil society for oppressions committed against people who did not injure us,” wrote the Quaker John Woolman in 1769, “and that if the particular case of many individuals were fairly stated, it would appear that there was considerable due to them.” As the historian Roy E. Finkenbine has documented, at the dawn of this country, black reparations were actively considered and often effected. Quakers in New York, New England, and Baltimore went so far as to make “membership contingent upon compensating one’s former slaves.” In 1782, the Quaker Robert Pleasants emancipated his 78 slaves, granted them 350 acres, and later built a school on their property and provided for their education. “The doing of this justice to the injured Africans,” wrote Pleasants, “would be an acceptable offering to him who ‘Rules in the kingdom of men.’ ” Edward Coles, a protégé of Thomas Jefferson who became a slaveholder through inheritance, took many of his slaves north and granted them a plot of land in Illinois. John Randolph, a cousin of Jefferson’s, willed that all his slaves be emancipated upon his death, and that all those older than 40 be given 10 acres of land. “I give and bequeath to all my slaves their freedom,” Randolph wrote, “heartily regretting that I have been the owner of one.” In his book Forever Free, Eric Foner recounts the story of a disgruntled planter reprimanding a freedman loafing on the job: Planter: “You lazy nigger, I am losing a whole day’s labor by you.” Freedman: “Massa, how many days’ labor have I lost by you?” In the 20th century, the cause of reparations was taken up by a diverse cast that included the Confederate veteran Walter R.
Vaughan, who believed that reparations would be a stimulus for the South; the black activist Callie House; black-nationalist leaders like “Queen Mother” Audley Moore; and the civil-rights activist James Forman. The movement coalesced in 1987 under an umbrella organization called the National Coalition of Blacks for Reparations in America (N’COBRA). The NAACP endorsed reparations in 1993. Charles J. Ogletree Jr., a professor at Harvard Law School, has pursued reparations claims in court. But while the people advocating reparations have changed over time, the response from the country has remained virtually the same. “They have been taught to labor,” the Chicago Tribune editorialized in 1891. “They have been taught Christian civilization, and to speak the noble English language instead of some African gibberish. The account is square with the ex-slaves.” Not exactly. Having been enslaved for 250 years, black people were not left to their own devices. They were terrorized. In the Deep South, a second slavery ruled. In the North, legislatures, mayors, civic associations, banks, and citizens all colluded to pin black people into ghettos, where they were overcrowded, overcharged, and undereducated. Businesses discriminated against them, awarding them the worst jobs and the worst wages. Police brutalized them in the streets. And the notion that black lives, black bodies, and black wealth were rightful targets remained deeply rooted in the broader society. Now we have half-stepped away from our long centuries of despoilment, promising, “Never again.” But still we are haunted. It is as though we have run up a credit-card bill and, having pledged to charge no more, remain befuddled that the balance does not disappear. The effects of that balance, interest accruing daily, are all around us. Broach the topic of reparations today and a barrage of questions inevitably follows: Who will be paid? How much will they be paid? Who will pay? But if the practicalities, not the justice, of reparations are the true sticking point, there has for some time been the beginnings of a solution. For the past 25 years, Congressman John Conyers Jr., who represents the Detroit area, has marked every session of Congress by introducing a bill calling for a congressional study of slavery and its lingering effects as well as recommendations for “appropriate remedies.” A country curious about how reparations might actually work has an easy solution in Conyers’s bill, now called HR 40, the Commission to Study Reparation Proposals for African Americans Act. We would support this bill, submit the question to study, and then assess the possible solutions. But we are not interested. “It’s because it’s black folks making the claim,” Nkechi Taifa, who helped found N’COBRA, says. “People who talk about reparations are considered left lunatics. But all we are talking about is studying [reparations]. As John Conyers has said, we study everything. We study the water, the air. We can’t even study the issue? This bill does not authorize one red cent to anyone.” That HR 40 has never—under either Democrats or Republicans—made it to the House floor suggests our concerns are rooted not in the impracticality of reparations but in something more existential. If we conclude that the conditions in North Lawndale and black America are not inexplicable but are instead precisely what you’d expect of a community that for centuries has lived in America’s crosshairs, then what are we to make of the world’s oldest democracy?
One cannot escape the question by hand-waving at the past, disavowing the acts of one’s ancestors, nor by citing a recent date of ancestral immigration. The last slaveholder has been dead for a very long time. The last soldier to endure Valley Forge has been dead much longer. To proudly claim the veteran and disown the slaveholder is patriotism à la carte. A nation outlives its generations. We were not there when Washington crossed the Delaware, but Emanuel Gottlieb Leutze’s rendering has meaning to us. We were not there when Woodrow Wilson took us into World War I, but we are still paying out the pensions. If Thomas Jefferson’s genius matters, then so does his taking of Sally Hemings’s body. If George Washington crossing the Delaware matters, so must his ruthless pursuit of the runagate Oney Judge. In 1909, President William Howard Taft told the country that “intelligent” white southerners were ready to see blacks as “useful members of the community.” A week later Joseph Gordon, a black man, was lynched outside Greenwood, Mississippi. The high point of the lynching era has passed. But the memories of those robbed of their lives still live on in the lingering effects. Indeed, in America there is a strange and powerful belief that if you stab a black person 10 times, the bleeding stops and the healing begins the moment the assailant drops the knife. We believe white dominance to be a fact of the inert past, a delinquent debt that can be made to disappear if only we don’t look. There has always been another way. “It is in vain to alledge, that our ancestors brought them hither, and not we,” Yale President Timothy Dwight said in 1810. We inherit our ample patrimony with all its incumbrances; and are bound to pay the debts of our ancestors. This debt, particularly, we are bound to discharge: and, when the righteous Judge of the Universe comes to reckon with his servants, he will rigidly exact the payment at our hands. To give them liberty, and stop here, is to entail upon them a curse.
IV. “The Ills That Slavery Frees Us From”
America begins in black plunder and white democracy, two features that are not contradictory but complementary. “The men who came together to found the independent United States, dedicated to freedom and equality, either held slaves or were willing to join hands with those who did,” the historian Edmund S. Morgan wrote. “None of them felt entirely comfortable about the fact, but neither did they feel responsible for it. Most of them had inherited both their slaves and their attachment to freedom from an earlier generation, and they knew the two were not unconnected.” When enslaved Africans, plundered of their bodies, plundered of their families, and plundered of their labor, were brought to the colony of Virginia in 1619, they did not initially endure the naked racism that would engulf their progeny. Some of them were freed. Some of them intermarried. Still others escaped with the white indentured servants who had suffered as they had. Some even rebelled together, allying under Nathaniel Bacon to torch Jamestown in 1676. One hundred years later, the idea of slaves and poor whites joining forces would shock the senses, but in the early days of the English colonies, the two groups had much in common.
English visitors to Virginia found that its masters “abuse their servantes with intollerable oppression and hard usage.” White servants were flogged, tricked into serving beyond their contracts, and traded in much the same manner as slaves. This “hard usage” originated in a simple fact of the New World—land was boundless but cheap labor was limited. As life spans increased in the colony, the Virginia planters found in the enslaved Africans an even more efficient source of cheap labor. Whereas indentured servants were still legal subjects of the English crown and thus entitled to certain protections, African slaves entered the colonies as aliens. Exempted from the protections of the crown, they became early America’s indispensable working class—fit for maximum exploitation, capable of only minimal resistance. For the next 250 years, American law worked to reduce black people to a class of untouchables and raise all white men to the level of citizens. In 1650, Virginia mandated that “all persons except Negroes” were to carry arms. In 1664, Maryland mandated that any Englishwoman who married a slave must live as a slave of her husband’s master. In 1705, the Virginia assembly passed a law allowing for the dismemberment of unruly slaves—but forbidding masters from whipping “a Christian white servant naked, without an order from a justice of the peace.” In that same law, the colony mandated that “all horses, cattle, and hogs, now belonging, or that hereafter shall belong to any slave” be seized and sold off by the local church, the profits used to support “the poor of the said parish.” At that time, there would have still been people alive who could remember blacks and whites joining to burn down Jamestown only 29 years before. But at the beginning of the 18th century, two primary classes were enshrined in America. “The two great divisions of society are not the rich and poor, but white and black,” John C. Calhoun, South Carolina’s senior senator, declared on the Senate floor in 1848. “And all the former, the poor as well as the rich, belong to the upper class, and are respected and treated as equals.” In 1860, the majority of people living in South Carolina and Mississippi, almost half of those living in Georgia, and about one-third of all Southerners were on the wrong side of Calhoun’s line. The state with the largest number of enslaved Americans was Virginia, where in certain counties some 70 percent of all people labored in chains. Nearly one-fourth of all white Southerners owned slaves, and upon their backs the economic basis of America—and much of the Atlantic world—was erected. In the seven cotton states, one-third of all white income was derived from slavery. By 1840, cotton produced by slave labor constituted 59 percent of the country’s exports. The web of this slave society extended north to the looms of New England, and across the Atlantic to Great Britain, where it powered a great economic transformation and altered the trajectory of world history. “Whoever says Industrial Revolution,” wrote the historian Eric J. Hobsbawm, “says cotton.” The wealth accorded America by slavery was not just in what the slaves pulled from the land but in the slaves themselves. “In 1860, slaves as an asset were worth more than all of America’s manufacturing, all of the railroads, all of the productive capacity of the United States put together,” the Yale historian David W. Blight has noted.
“Slaves were the single largest, by far, financial asset of property in the entire American economy.” The sale of these slaves—“in whose bodies that money congealed,” writes Walter Johnson, a Harvard historian—generated even more ancillary wealth. Loans were taken out for purchase, to be repaid with interest. Insurance policies were drafted against the untimely death of a slave and the loss of potential profits. Slave sales were taxed and notarized. The vending of the black body and the sundering of the black family became an economy unto themselves, estimated to have brought in tens of millions of dollars to antebellum America. In 1860 there were more millionaires per capita in the Mississippi Valley than anywhere else in the country. Beneath the cold numbers lay lives divided. “I had a constant dread that Mrs. Moore, her mistress, would be in want of money and sell my dear wife,” a freedman wrote, reflecting on his time in slavery. “We constantly dreaded a final separation. Our affection for each was very strong, and this made us always apprehensive of a cruel parting.” Forced partings were common in the antebellum South. A slave in some parts of the region stood a 30 percent chance of being sold in his or her lifetime. Twenty-five percent of interstate trades destroyed a first marriage and half of them destroyed a nuclear family. When the wife and children of Henry Brown, a slave in Richmond, Virginia, were to be sold away, Brown searched for a white master who might buy his wife and children to keep the family together. He failed: The next day, I stationed myself by the side of the road, along which the slaves, amounting to three hundred and fifty, were to pass. The purchaser of my wife was a Methodist minister, who was about starting for North Carolina. Pretty soon five waggon-loads of little children passed, and looking at the foremost one, what should I see but a little child, pointing its tiny hand towards me, exclaiming, “There’s my father; I knew he would come and bid me good-bye.” It was my eldest child! Soon the gang approached in which my wife was chained. I looked, and beheld her familiar face; but O, reader, that glance of agony! may God spare me ever again enduring the excruciating horror of that moment! She passed, and came near to where I stood. I seized hold of her hand, intending to bid her farewell; but words failed me; the gift of utterance had fled, and I remained speechless. I followed her for some distance, with her hand grasped in mine, as if to save her from her fate, but I could not speak, and I was obliged to turn away in silence. In a time when telecommunications were primitive and blacks lacked freedom of movement, the parting of black families was a kind of murder. Here we find the roots of American wealth and democracy—in the for-profit destruction of the most important asset available to any people, the family. The destruction was not incidental to America’s rise; it facilitated that rise. By erecting a slave society, America created the economic foundation for its great experiment in democracy. The labor strife that seeded Bacon’s rebellion was suppressed. America’s indispensable working class existed as property beyond the realm of politics, leaving white Americans free to trumpet their love of freedom and democratic values. 
Assessing antebellum democracy in Virginia, a visitor from England observed that the state’s natives “can profess an unbounded love of liberty and of democracy in consequence of the mass of the people, who in other countries might become mobs, being there nearly altogether composed of their own Negro slaves.”
V. The Quiet Plunder
The consequences of 250 years of enslavement, of war upon black families and black people, were profound. Like homeownership today, slave ownership was aspirational, attracting not just those who owned slaves but those who wished to. Much as homeowners today might discuss the addition of a patio or the painting of a living room, slaveholders traded tips on the best methods for breeding workers, exacting labor, and doling out punishment. Just as a homeowner today might subscribe to a magazine like This Old House, slaveholders had journals such as De Bow’s Review, which recommended the best practices for wringing profits from slaves. By the dawn of the Civil War, the enslavement of black America was thought to be so foundational to the country that those who sought to end it were branded heretics worthy of death. Imagine what would happen if a president today came out in favor of taking all American homes from their owners: the reaction might well be violent. In the aftermath of the Civil War, Radical Republicans attempted to reconstruct the country upon something resembling universal equality—but they were beaten back by a campaign of “Redemption,” led by White Liners, Red Shirts, and Klansmen bent on upholding a society “formed for the white, not for the black man.” A wave of terrorism roiled the South. In his massive history Reconstruction, Eric Foner recounts incidents of black people being attacked for not removing their hats; for refusing to hand over a whiskey flask; for disobeying church procedures; for “using insolent language”; for disputing labor contracts; for refusing to be “tied like a slave.” Sometimes the attacks were intended simply to “thin out the niggers a little.” Terrorism carried the day. Federal troops withdrew from the South in 1877. The dream of Reconstruction died. For the next century, political violence was visited upon blacks wantonly, with special treatment meted out toward black people of ambition. Black schools and churches were burned to the ground. Black voters and the political candidates who attempted to rally them were intimidated, and some were murdered. At the end of World War I, black veterans returning to their homes were assaulted for daring to wear the American uniform. The demobilization of soldiers after the war, which put white and black veterans into competition for scarce jobs, produced the Red Summer of 1919: a succession of racist pogroms against dozens of cities ranging from Longview, Texas, to Chicago to Washington, D.C. Organized white violence against blacks continued into the 1920s—in 1921 a white mob leveled Tulsa’s “Black Wall Street,” and in 1923 another one razed the black town of Rosewood, Florida—and virtually no one was punished. The work of mobs was a rabid and violent rendition of prejudices that extended even into the upper reaches of American government. The New Deal is today remembered as a model for what progressive government should do—cast a broad social safety net that protects the poor and the afflicted while building the middle class. When progressives wish to express their disappointment with Barack Obama, they point to the accomplishments of Franklin Roosevelt.
But these progressives rarely note that Roosevelt’s New Deal, much like the democracy that produced it, rested on the foundation of Jim Crow. “The Jim Crow South,” writes Ira Katznelson, a history and political-science professor at Columbia, “was the one collaborator America’s democracy could not do without.” The marks of that collaboration are all over the New Deal. The omnibus programs passed under the Social Security Act in 1935 were crafted in such a way as to protect the southern way of life. Old-age insurance (Social Security proper) and unemployment insurance excluded farmworkers and domestics—jobs heavily occupied by blacks. When President Roosevelt signed Social Security into law in 1935, 65 percent of African Americans nationally and between 70 and 80 percent in the South were ineligible. The NAACP protested, calling the new American safety net “a sieve with holes just big enough for the majority of Negroes to fall through.” The oft-celebrated G.I. Bill similarly failed black Americans, by mirroring the broader country’s insistence on a racist housing policy. Though ostensibly color-blind, Title III of the bill, which aimed to give veterans access to low-interest home loans, left black veterans to tangle with white officials at their local Veterans Administration as well as with the same banks that had, for years, refused to grant mortgages to blacks. The historian Kathleen J. Frydl observes in her 2009 book, The GI Bill, that so many blacks were disqualified from receiving Title III benefits “that it is more accurate simply to say that blacks could not use this particular title.” In Cold War America, homeownership was seen as a means of instilling patriotism, and as a civilizing and anti-radical force. “No man who owns his own house and lot can be a Communist,” claimed William Levitt, who pioneered the modern suburb with the development of the various Levittowns, his famous planned communities. “He has too much to do.” But the Levittowns were, with Levitt’s willing acquiescence, segregated throughout their early years. Daisy and Bill Myers, the first black family to move into Levittown, Pennsylvania, were greeted with protests and a burning cross. A neighbor who opposed the family said that Bill Myers was “probably a nice guy, but every time I look at him I see $2,000 drop off the value of my house.” The neighbor had good reason to be afraid. Bill and Daisy Myers were from the other side of John C. Calhoun’s dual society. If they moved next door, housing policy almost guaranteed that their neighbors’ property values would decline. Whereas shortly before the New Deal, a typical mortgage required a large down payment and full repayment within about 10 years, the creation of the Home Owners’ Loan Corporation in 1933 and then the Federal Housing Administration the following year allowed banks to offer loans requiring no more than 10 percent down, amortized over 20 to 30 years. “Without federal intervention in the housing market, massive suburbanization would have been impossible,” writes Thomas J. Sugrue, a historian at the University of Pennsylvania. “In 1930, only 30 percent of Americans owned their own homes; by 1960, more than 60 percent were home owners. Home ownership became an emblem of American citizenship.” That emblem was not to be awarded to blacks. The American real-estate industry believed segregation to be a moral principle.
As late as 1950, the National Association of Real Estate Boards’ code of ethics warned that “a Realtor should never be instrumental in introducing into a neighborhood … any race or nationality, or any individuals whose presence will clearly be detrimental to property values.” A 1943 brochure specified that such potential undesirables might include madams, bootleggers, gangsters—and “a colored man of means who was giving his children a college education and thought they were entitled to live among whites.” The federal government concurred. It was the Home Owners’ Loan Corporation, not a private trade association, that pioneered the practice of redlining, selectively granting loans and insisting that any property it insured be covered by a restrictive covenant—a clause in the deed forbidding the sale of the property to anyone other than whites. Millions of dollars flowed from tax coffers into segregated white neighborhoods. “For perhaps the first time, the federal government embraced the discriminatory attitudes of the marketplace,” the historian Kenneth T. Jackson wrote in his 1985 book, Crabgrass Frontier, a history of suburbanization. “Previously, prejudices were personalized and individualized; FHA exhorted segregation and enshrined it as public policy. Whole areas of cities were declared ineligible for loan guarantees.” Redlining was not officially outlawed until 1968, by the Fair Housing Act. By then the damage was done—and reports of redlining by banks have continued. The federal government is premised on equal fealty from all its citizens, who in return are to receive equal treatment. But as late as the mid-20th century, this bargain was not granted to black people, who repeatedly paid a higher price for citizenship and received less in return. Plunder had been the essential feature of slavery, of the society described by Calhoun. But practically a full century after the end of the Civil War and the abolition of slavery, the plunder—quiet, systemic, submerged—continued even amidst the aims and achievements of New Deal liberals.
VI. Making The Second Ghetto
Today Chicago is one of the most segregated cities in the country, a fact that reflects assiduous planning. In the effort to uphold white supremacy at every level down to the neighborhood, Chicago—a city founded by the black fur trader Jean Baptiste Point du Sable—has long been a pioneer. The efforts began in earnest in 1917, when the Chicago Real Estate Board, horrified by the influx of southern blacks, lobbied to zone the entire city by race. But after the Supreme Court ruled against explicit racial zoning that year, the city was forced to pursue its agenda by more-discreet means. Like the Home Owners’ Loan Corporation, the Federal Housing Administration initially insisted on restrictive covenants, which helped bar blacks and other ethnic undesirables from receiving federally backed home loans. By the 1940s, Chicago led the nation in the use of these restrictive covenants, and about half of all residential neighborhoods in the city were effectively off-limits to blacks. It is common today to become misty-eyed about the old black ghetto, where doctors and lawyers lived next door to meatpackers and steelworkers, who themselves lived next door to prostitutes and the unemployed. This segregationist nostalgia ignores the actual conditions endured by the people living there—vermin and arson, for instance—and ignores the fact that the old ghetto was premised on denying black people privileges enjoyed by white Americans.
In 1948, when the Supreme Court ruled that restrictive covenants, while permissible, were not enforceable by judicial action, Chicago had other weapons at the ready. The Illinois state legislature had already given Chicago’s city council the right to approve—and thus to veto—any public housing in the city’s wards. This came in handy in 1949, when a new federal housing act sent millions of tax dollars into Chicago and other cities around the country. Beginning in 1950, site selection for public housing proceeded entirely on the grounds of segregation. By the 1960s, the city had created with its vast housing projects what the historian Arnold R. Hirsch calls a “second ghetto,” one larger than the old Black Belt but just as impermeable. More than 98 percent of all the family public-housing units built in Chicago between 1950 and the mid‑1960s were built in all-black neighborhoods. Governmental embrace of segregation was driven by the virulent racism of Chicago’s white citizens. White neighborhoods vulnerable to black encroachment formed block associations for the sole purpose of enforcing segregation. They lobbied fellow whites not to sell. They lobbied those blacks who did manage to buy to sell back. In 1949, a group of Englewood Catholics formed block associations intended to “keep up the neighborhood.” Translation: keep black people out. And when civic engagement was not enough, when government failed, when private banks could no longer hold the line, Chicago turned to an old tool in the American repertoire—racial violence. “The pattern of terrorism is easily discernible,” concluded a Chicago civic group in the 1940s. “It is at the seams of the black ghetto in all directions.” On July 1 and 2 of 1946, a mob of thousands assembled in Chicago’s Park Manor neighborhood, hoping to eject a black doctor who’d recently moved in. The mob pelted the house with rocks and set the garage on fire. The doctor moved away. In 1947, after a few black veterans moved into the Fernwood section of Chicago, three nights of rioting broke out; gangs of whites yanked blacks off streetcars and beat them. Two years later, when a union meeting attended by blacks in Englewood triggered rumors that a home was being “sold to niggers,” blacks (and whites thought to be sympathetic to them) were beaten in the streets. In 1951, thousands of whites in Cicero, 20 minutes or so west of downtown Chicago, attacked an apartment building that housed a single black family, throwing bricks and firebombs through the windows and setting the apartment on fire. A Cook County grand jury declined to charge the rioters—and instead indicted the family’s NAACP attorney, the apartment’s white owner, and the owner’s attorney and rental agent, charging them with conspiring to lower property values. Two years after that, whites picketed and planted explosives in South Deering, about 30 minutes from downtown Chicago, to force blacks out. When terrorism ultimately failed, white homeowners simply fled the neighborhood. The traditional terminology, white flight, implies a kind of natural expression of preference. In fact, white flight was a triumph of social engineering, orchestrated by the shared racist presumptions of America’s public and private sectors. 
For should any nonracist white families decide that integration might not be so bad as a matter of principle or practicality, they still had to contend with the hard facts of American housing policy: When the mid-20th-century white homeowner claimed that the presence of a Bill and Daisy Myers decreased his property value, he was not merely engaging in racist dogma—he was accurately observing the impact of federal policy on market prices. Redlining destroyed the possibility of investment wherever black people lived.
VII. “A Lot Of People Fell By The Way”
Speculators in North Lawndale, and at the edge of the black ghettos, knew there was money to be made off white panic. They resorted to “block-busting”—spooking whites into selling cheap before the neighborhood became black. They would hire a black woman to walk up and down the street with a stroller. Or they’d hire someone to call a number in the neighborhood looking for “Johnny Mae.” Then they’d cajole whites into selling at low prices, informing them that the more blacks who moved in, the more the value of their homes would decline, so better to sell now. With these white-fled homes in hand, speculators then turned to the masses of black people who had streamed northward as part of the Great Migration, or who were desperate to escape the ghettos: the speculators would take the houses they’d just bought cheap through block-busting and sell them to blacks on contract. To keep up with his payments and keep his heat on, Clyde Ross took a second job at the post office and then a third job delivering pizza. His wife took a job working at Marshall Field. He had to take some of his children out of private school. He was not able to be at home to supervise his children or help them with their homework. Money and time that Ross wanted to give his children went instead to enrich white speculators. “The problem was the money,” Ross told me. “Without the money, you can’t move. You can’t educate your kids. You can’t give them the right kind of food. Can’t make the house look good. They think this neighborhood is where they supposed to be. It changes their outlook. My kids were going to the best schools in this neighborhood, and I couldn’t keep them in there.” Mattie Lewis came to Chicago from her native Alabama in the mid-’40s, when she was 21, persuaded by a friend who told her she could get a job as a hairdresser. Instead she was hired by Western Electric, where she worked for 41 years. I met Lewis in the home of her neighbor Ethel Weatherspoon. Both had owned homes in North Lawndale for more than 50 years. Both had bought their houses on contract. Both had been active with Clyde Ross in the Contract Buyers League’s effort to garner restitution from contract sellers who’d operated in North Lawndale, banks who’d backed the scheme, and even the Federal Housing Administration. We were joined by Jack Macnamara, who’d been an organizing force in the Contract Buyers League when it was founded, in 1968. Our gathering had the feel of a reunion, because the writer James Alan McPherson had profiled the Contract Buyers League for The Atlantic back in 1972. Weatherspoon bought her home in 1957. “Most of the whites started moving out,” she told me. “‘The blacks are coming. The blacks are coming.’ They actually said that. They had signs up: don’t sell to blacks.” Before moving to North Lawndale, Lewis and her husband tried moving to Cicero after seeing a house advertised for sale there. “Sorry, I just sold it today,” the Realtor told Lewis’s husband.
“I told him, ‘You know they don’t want you in Cicero,’ ” Lewis recalls. “ ‘They ain’t going to let nobody black in Cicero.’ ” In 1958, the couple bought a home in North Lawndale on contract. They were not blind to the unfairness. But Lewis, born in the teeth of Jim Crow, considered American piracy—black people keep on making it, white people keep on taking it—a fact of nature. “All I wanted was a house. And that was the only way I could get it. They weren’t giving black people loans at that time,” she said. “We thought, ‘This is the way it is. We going to do it till we die, and they ain’t never going to accept us. That’s just the way it is.’ “The only way you were going to buy a home was to do it the way they wanted,” she continued. “And I was determined to get me a house. If everybody else can have one, I want one too. I had worked for white people in the South. And I saw how these white people were living in the North and I thought, ‘One day I’m going to live just like them.’ I wanted cabinets and all these things these other people have.” Whenever she visited white co-workers at their homes, she saw the difference. “I could see we were just getting ripped off,” she said. “I would see things and I would say, ‘I’d like to do this at my house.’ And they would say, ‘Do it,’ but I would think, ‘I can’t, because it costs us so much more.’ ” I asked Lewis and Weatherspoon how they kept up on payments. “You paid it and kept working,” Lewis said of the contract. “When that payment came up, you knew you had to pay it.” “You cut down on the light bill. Cut down on your food bill,” Weatherspoon interjected. “You cut down on things for your child, that was the main thing,” said Lewis. “My oldest wanted to be an artist and my other wanted to be a dancer and my other wanted to take music.” Lewis and Weatherspoon, like Ross, were able to keep their homes. The suit did not win them any remuneration. But it forced contract sellers to the table, where they allowed some members of the Contract Buyers League to move into regular mortgages or simply take over their houses outright. By then they’d been bilked for thousands. In talking with Lewis and Weatherspoon, I was seeing only part of the picture—the tiny minority who’d managed to hold on to their homes. But for all our exceptional ones, for every Barack and Michelle Obama, for every Ethel Weatherspoon or Clyde Ross, for every black survivor, there are so many thousands gone. “A lot of people fell by the way,” Lewis told me. “One woman asked me if I would keep all her china. She said, ‘They ain’t going to set you out.’ ”
VIII. “Negro Poverty Is Not White Poverty”
On a recent spring afternoon in North Lawndale, I visited Billy Lamar Brooks Sr. Brooks has been an activist since his youth in the Black Panther Party, when he aided the Contract Buyers League. I met him in his office at the Better Boys Foundation, a staple of North Lawndale whose mission is to direct local kids off the streets and into jobs and college. Brooks’s work is personal. On June 14, 1991, his 19-year-old son, Billy Jr., was shot and killed. “These guys tried to stick him up,” Brooks told me. “I suspect he could have been involved in some things … He’s always on my mind. Every day.” Brooks was not raised in the streets, though in such a neighborhood it is impossible to avoid the influence. “I was in church three or four times a week. That’s where the girls were,” he said, laughing.
“The stark reality is still there. There’s no shield from life. You got to go to school. I lived here. I went to Marshall High School. Over here were the Egyptian Cobras. Over there were the Vice Lords.”

Brooks has since moved away from Chicago’s West Side. But he is still working in North Lawndale. If “you got a nice house, you live in a nice neighborhood, then you are less prone to violence, because your space is not deprived,” Brooks said. “You got a security point. You don’t need no protection.” But if “you grow up in a place like this, housing sucks. When they tore down the projects here, they left the high-rises and came to the neighborhood with that gang mentality. You don’t have nothing, so you going to take something, even if it’s not real. You don’t have no street, but in your mind it’s yours.”

We walked over to a window behind his desk. A group of young black men were hanging out in front of a giant mural memorializing two black men: in lovin memory quentin aka “q,” july 18, 1974 ❤ march 2, 2012. The name and face of the other man had been spray-painted over by a rival group. The men drank beer. Occasionally a car would cruise past, slow to a crawl, then stop. One of the men would approach the car and make an exchange, then the car would drive off. Brooks had known all of these young men as boys. “That’s their corner,” he said.

We watched another car roll through, pause briefly, then drive off. “No respect, no shame,” Brooks said. “That’s what they do. From that alley to that corner. They don’t go no farther than that. See the big brother there? He almost died a couple of years ago. The one drinking the beer back there … I know all of them. And the reason they feel safe here is cause of this building, and because they too chickenshit to go anywhere. But that’s their mentality. That’s their block.”

Brooks showed me a picture of a Little League team he had coached. He went down the row of kids, pointing out which ones were in jail, which ones were dead, and which ones were doing all right. And then he pointed out his son—“That’s my boy, Billy,” Brooks said. Then he wondered aloud if keeping his son with him while working in North Lawndale had hastened his death. “It’s a definite connection, because he was part of what I did here. And I think maybe I shouldn’t have exposed him. But then, I had to,” he said, “because I wanted him with me.”

From the White House on down, the myth holds that fatherhood is the great antidote to all that ails black people. But Billy Brooks Jr. had a father. Trayvon Martin had a father. Jordan Davis had a father. Adhering to middle-class norms has never shielded black people from plunder. Adhering to middle-class norms is what made Ethel Weatherspoon a lucrative target for rapacious speculators. Contract sellers did not target the very poor. They targeted black people who had worked hard enough to save a down payment and dreamed of the emblem of American citizenship—homeownership. It was not a tangle of pathology that put a target on Clyde Ross’s back. It was not a culture of poverty that singled out Mattie Lewis for “the thrill of the chase and the kill.” Some black people always will be twice as good. But they generally find white predation to be thrice as fast.

Liberals today mostly view racism not as an active, distinct evil but as a relative of white poverty and inequality.
They ignore the long tradition of this country actively punishing black success—and the elevation of that punishment, in the mid-20th century, to federal policy. President Lyndon Johnson may have noted in his historic civil-rights speech at Howard University in 1965 that “Negro poverty is not white poverty.” But his advisers and their successors were, and still are, loath to craft any policy that recognizes the difference. After his speech, Johnson convened a group of civil-rights leaders, including the esteemed A. Philip Randolph and Bayard Rustin, to address the “ancient brutality.” In a strategy paper, they agreed with the president that “Negro poverty is a special, and particularly destructive, form of American poverty.” But when it came to specifically addressing the “particularly destructive,” Rustin’s group demurred, preferring to advance programs that addressed “all the poor, black and white.” The urge to use the moral force of the black struggle to address broader inequalities originates in both compassion and pragmatism. But it makes for ambiguous policy. Affirmative action’s precise aims, for instance, have always proved elusive. Is it meant to make amends for the crimes heaped upon black people? Not according to the Supreme Court. In its 1978 ruling in Regents of the University of California v. Bakke, the Court rejected “societal discrimination” as “an amorphous concept of injury that may be ageless in its reach into the past.” Is affirmative action meant to increase “diversity”? If so, it only tangentially relates to the specific problems of black people—the problem of what America has taken from them over several centuries. This confusion about affirmative action’s aims, along with our inability to face up to the particular history of white-imposed black disadvantage, dates back to the policy’s origins. “There is no fixed and firm definition of affirmative action,” an appointee in Johnson’s Department of Labor declared. “Affirmative action is anything that you have to do to get results. But this does not necessarily include preferential treatment.” Yet America was built on the preferential treatment of white people—395 years of it. Vaguely endorsing a cuddly, feel-good diversity does very little to redress this. Today, progressives are loath to invoke white supremacy as an explanation for anything. On a practical level, the hesitation comes from the dim view the Supreme Court has taken of the reforms of the 1960s. The Voting Rights Act has been gutted. The Fair Housing Act might well be next. Affirmative action is on its last legs. In substituting a broad class struggle for an anti-racist struggle, progressives hope to assemble a coalition by changing the subject. The politics of racial evasion are seductive. But the record is mixed. Aid to Families With Dependent Children was originally written largely to exclude blacks—yet by the 1990s it was perceived as a giveaway to blacks. The Affordable Care Act makes no mention of race, but this did not keep Rush Limbaugh from denouncing it as reparations. Moreover, the act’s expansion of Medicaid was effectively made optional, meaning that many poor blacks in the former Confederate states do not benefit from it. The Affordable Care Act, like Social Security, will eventually expand its reach to those left out; in the meantime, black people will be injured. “All that it would take to sink a new WPA program would be some skillfully packaged footage of black men leaning on shovels smoking cigarettes,” the sociologist Douglas S. 
Massey writes. “Papering over the issue of race makes for bad social theory, bad research, and bad public policy.” To ignore the fact that one of the oldest republics in the world was erected on a foundation of white supremacy, to pretend that the problems of a dual society are the same as the problems of unregulated capitalism, is to cover the sin of national plunder with the sin of national lying.

The lie ignores the fact that reducing American poverty and ending white supremacy are not the same. The lie ignores the fact that closing the “achievement gap” will do nothing to close the “injury gap,” in which black college graduates still suffer higher unemployment rates than white college graduates, and black job applicants without criminal records enjoy roughly the same chance of getting hired as white applicants with criminal records.

Chicago, like the country at large, embraced policies that placed black America’s most energetic, ambitious, and thrifty countrymen beyond the pale of society and marked them as rightful targets for legal theft. The effects reverberate beyond the families who were robbed to the community that beholds the spectacle. Don’t just picture Clyde Ross working three jobs so he could hold on to his home. Think of his North Lawndale neighbors—their children, their nephews and nieces—and consider how watching this affects them. Imagine yourself as a young black child watching your elders play by all the rules only to have their possessions tossed out in the street and to have their most sacred possession—their home—taken from them.

The message the young black boy receives from his country, Billy Brooks says, is “ ‘You ain’t shit. You not no good. The only thing you are worth is working for us. You will never own anything. You not going to get an education. We are sending your ass to the penitentiary.’ They’re telling you no matter how hard you struggle, no matter what you put down, you ain’t shit. ‘We’re going to take what you got. You will never own anything, nigger.’ ”

IX. Toward A New Country

When Clyde Ross was a child, his older brother Winter had a seizure. He was picked up by the authorities and delivered to Parchman Farm, a 20,000-acre state prison in the Mississippi Delta region. “He was a gentle person,” Clyde Ross says of his brother. “You know, he was good to everybody. And he started having spells, and he couldn’t control himself. And they had him picked up, because they thought he was dangerous.”

Built at the turn of the century, Parchman was supposed to be a progressive and reformist response to the problem of “Negro crime.” In fact it was the gulag of Mississippi, an object of terror to African Americans in the Delta. In the early years of the 20th century, Mississippi Governor James K. Vardaman used to amuse himself by releasing black convicts into the surrounding wilderness and hunting them down with bloodhounds. “Throughout the American South,” writes David M. Oshinsky in his book Worse Than Slavery, “Parchman Farm is synonymous with punishment and brutality, as well it should be … Parchman is the quintessential penal farm, the closest thing to slavery that survived the Civil War.”

When the Ross family went to retrieve Winter, the authorities told them that Winter had died. When the Ross family asked for his body, the authorities at Parchman said they had buried him. The family never saw Winter’s body. And this was just one of their losses.
Scholars have long discussed methods by which America might make reparations to those on whose labor and exclusion the country was built. In the 1970s, the Yale Law professor Boris Bittker argued in The Case for Black Reparations that a rough price tag for reparations could be determined by multiplying the number of African Americans in the population by the difference in white and black per capita income. That number—$34 billion in 1973, when Bittker wrote his book—could be added to a reparations program each year for a decade or two. (A back-of-the-envelope sketch of this arithmetic appears below.) Today Charles Ogletree, the Harvard Law School professor, argues for something broader: a program of job training and public works that takes racial justice as its mission but includes the poor of all races.

To celebrate freedom and democracy while forgetting America’s origins in a slavery economy is patriotism à la carte. Perhaps no statistic better illustrates the enduring legacy of our country’s shameful history of treating black people as sub-citizens, sub-Americans, and sub-humans than the wealth gap. Reparations would seek to close this chasm. But as surely as the creation of the wealth gap required the cooperation of every aspect of the society, bridging it will require the same.

Perhaps after a serious discussion and debate—the kind that HR 40 proposes—we may find that the country can never fully repay African Americans. But we stand to discover much about ourselves in such a discussion—and that is perhaps what scares us. The idea of reparations is frightening not simply because we might lack the ability to pay. The idea of reparations threatens something much deeper—America’s heritage, history, and standing in the world.

The early American economy was built on slave labor. The Capitol and the White House were built by slaves. President James K. Polk traded slaves from the Oval Office. The laments about “black pathology,” the criticism of black family structures by pundits and intellectuals, ring hollow in a country whose existence was predicated on the torture of black fathers, on the rape of black mothers, on the sale of black children. An honest assessment of America’s relationship to the black family reveals the country to be not its nurturer but its destroyer. And this destruction did not end with slavery. Discriminatory laws joined the equal burden of citizenship to unequal distribution of its bounty. These laws reached their apex in the mid-20th century, when the federal government—through housing policies—engineered the wealth gap, which remains with us to this day. When we think of white supremacy, we picture colored only signs, but we should picture pirate flags.

On some level, we have always grasped this. “Negro poverty is not white poverty,” President Johnson said in his historic civil-rights speech. “Many of its causes and many of its cures are the same. But there are differences—deep, corrosive, obstinate differences—radiating painful roots into the community and into the family, and the nature of the individual. These differences are not racial differences. They are solely and simply the consequence of ancient brutality, past injustice, and present prejudice.”

We invoke the words of Jefferson and Lincoln because they say something about our legacy and our traditions. We do this because we recognize our links to the past—at least when they flatter us. But black history does not flatter American democracy; it chastens it.
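Bittker’s rule of thumb, mentioned above, reduces to a single multiplication: the Black population times the gap in per capita income, applied each year for a decade or two. The sketch below is a minimal, illustrative version of that arithmetic; the population and income figures in it are assumed round numbers chosen only to land near the $34 billion Bittker reported for 1973, not his actual inputs.

```python
# Back-of-the-envelope version of Bittker's rule of thumb (illustrative only).
# All figures are assumed round numbers for the early 1970s, not Bittker's own inputs.

black_population = 23_000_000        # assumed U.S. Black population
white_income_per_capita = 3_700      # assumed white per capita income, in dollars
black_income_per_capita = 2_200      # assumed Black per capita income, in dollars

# Annual figure = population multiplied by the per capita income gap.
annual_estimate = black_population * (white_income_per_capita - black_income_per_capita)
print(f"Annual reparations estimate: ${annual_estimate / 1e9:.1f} billion")
# Prints roughly $34.5 billion, the order of magnitude Bittker proposed
# adding to a reparations program each year for a decade or two.
```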
The popular mocking of reparations as a harebrained scheme authored by wild-eyed lefties and intellectually unserious black nationalists is fear masquerading as laughter. Black nationalists have always perceived something unmentionable about America that integrationists dare not acknowledge—that white supremacy is not merely the work of hotheaded demagogues, or a matter of false consciousness, but a force so fundamental to America that it is difficult to imagine the country without it.

And so we must imagine a new country. Reparations—by which I mean the full acceptance of our collective biography and its consequences—is the price we must pay to see ourselves squarely. The recovering alcoholic may well have to live with his illness for the rest of his life. But at least he is not living a drunken lie. Reparations beckons us to reject the intoxication of hubris and see America as it is—the work of fallible humans.

Won’t reparations divide us? Not any more than we are already divided. The wealth gap merely puts a number on something we feel but cannot say—that American prosperity was ill-gotten and selective in its distribution. What is needed is an airing of family secrets, a settling with old ghosts. What is needed is a healing of the American psyche and the banishment of white guilt.

What I’m talking about is more than recompense for past injustices—more than a handout, a payoff, hush money, or a reluctant bribe. What I’m talking about is a national reckoning that would lead to spiritual renewal. Reparations would mean the end of scarfing hot dogs on the Fourth of July while denying the facts of our heritage. Reparations would mean the end of yelling “patriotism” while waving a Confederate flag. Reparations would mean a revolution of the American consciousness, a reconciling of our self-image as the great democratizer with the facts of our history.

X. “There Will Be No ‘Reparations’ From Germany”

We are not the first to be summoned to such a challenge. In 1952, when West Germany began the process of making amends for the Holocaust, it did so under conditions that should be instructive to us. Resistance was violent. Very few Germans believed that Jews were entitled to anything. Only 5 percent of West Germans surveyed reported feeling guilty about the Holocaust, and only 29 percent believed that Jews were owed restitution from the German people. “The rest,” the historian Tony Judt wrote in his 2005 book, Postwar, “were divided between those (some two-fifths of respondents) who thought that only people ‘who really committed something’ were responsible and should pay, and those (21 percent) who thought ‘that the Jews themselves were partly responsible for what happened to them during the Third Reich.’ ”

Germany’s unwillingness to squarely face its history went beyond polls. Movies that suggested a societal responsibility for the Holocaust beyond Hitler were banned. “The German soldier fought bravely and honorably for his homeland,” claimed President Eisenhower, endorsing the Teutonic national myth. Judt wrote, “Throughout the fifties West German officialdom encouraged a comfortable view of the German past in which the Wehrmacht was heroic, while Nazis were in a minority and properly punished.”

Konrad Adenauer, the postwar German chancellor, was in favor of reparations, but his own party was divided, and he was able to get an agreement passed only with the votes of the Social Democratic opposition.
Among the Jews of Israel, reparations provoked violent and venomous reactions ranging from denunciation to assassination plots. On January 7, 1952, as the Knesset—the Israeli parliament—convened to discuss the prospect of a reparations agreement with West Germany, Menachem Begin, the future prime minister of Israel, stood in front of a large crowd, inveighing against the country that had plundered the lives, labor, and property of his people. Begin claimed that all Germans were Nazis and guilty of murder. His condemnations then spread to his own young state. He urged the crowd to stop paying taxes and claimed that the nascent Israeli nation characterized the fight over whether or not to accept reparations as a “war to the death.” When alerted that the police watching the gathering were carrying tear gas, allegedly of German manufacture, Begin yelled, “The same gases that asphyxiated our parents!” Begin then led the crowd in an oath to never forget the victims of the Shoah, lest “my right hand lose its cunning” and “my tongue cleave to the roof of my mouth.” He took the crowd through the streets toward the Knesset. From the rooftops, police repelled the crowd with tear gas and smoke bombs. But the wind shifted, and the gas blew back toward the Knesset, billowing through windows shattered by rocks. In the chaos, Begin and Prime Minister David Ben-Gurion exchanged insults. Two hundred civilians and 140 police officers were wounded. Nearly 400 people were arrested. Knesset business was halted. Begin then addressed the chamber with a fiery speech condemning the actions the legislature was about to take. “Today you arrested hundreds,” he said. “Tomorrow you may arrest thousands. No matter, they will go, they will sit in prison. We will sit there with them. If necessary, we will be killed with them. But there will be no ‘reparations’ from Germany.” Survivors of the Holocaust feared laundering the reputation of Germany with money, and mortgaging the memory of their dead. Beyond that, there was a taste for revenge. “My soul would be at rest if I knew there would be 6 million German dead to match the 6 million Jews,” said Meir Dworzecki, who’d survived the concentration camps of Estonia. Ben-Gurion countered this sentiment, not by repudiating vengeance but with cold calculation: “If I could take German property without sitting down with them for even a minute but go in with jeeps and machine guns to the warehouses and take it, I would do that—if, for instance, we had the ability to send a hundred divisions and tell them, ‘Take it.’ But we can’t do that.” The reparations conversation set off a wave of bomb attempts by Israeli militants. One was aimed at the foreign ministry in Tel Aviv. Another was aimed at Chancellor Adenauer himself. And one was aimed at the port of Haifa, where the goods bought with reparations money were arriving. West Germany ultimately agreed to pay Israel 3.45 billion deutsche marks, or more than $7 billion in today’s dollars. Individual reparations claims followed—for psychological trauma, for offense to Jewish honor, for halting law careers, for life insurance, for time spent in concentration camps. Seventeen percent of funds went toward purchasing ships. “By the end of 1961, these reparations vessels constituted two-thirds of the Israeli merchant fleet,” writes the Israeli historian Tom Segev in his book The Seventh Million. 
“From 1953 to 1963, the reparations money funded about a third of the total investment in Israel’s electrical system, which tripled its capacity, and nearly half the total investment in the railways.” Israel’s GNP tripled during the 12 years of the agreement. The Bank of Israel attributed 15 percent of this growth, along with 45,000 jobs, to investments made with reparations money. But Segev argues that the impact went far beyond that. Reparations “had indisputable psychological and political importance,” he writes. Reparations could not make up for the murder perpetrated by the Nazis. But they did launch Germany’s reckoning with itself, and perhaps provided a road map for how a great civilization might make itself worthy of the name. Assessing the reparations agreement, David Ben-Gurion said: For the first time in the history of relations between people, a precedent has been created by which a great State, as a result of moral pressure alone, takes it upon itself to pay compensation to the victims of the government that preceded it. For the first time in the history of a people that has been persecuted, oppressed, plundered and despoiled for hundreds of years in the countries of Europe, a persecutor and despoiler has been obliged to return part of his spoils and has even undertaken to make collective reparation as partial compensation for material losses. Something more than moral pressure calls America to reparations. We cannot escape our history. All of our solutions to the great problems of health care, education, housing, and economic inequality are troubled by what must go unspoken. “The reason black people are so far behind now is not because of now,” Clyde Ross told me. “It’s because of then.” In the early 2000s, Charles Ogletree went to Tulsa, Oklahoma, to meet with the survivors of the 1921 race riot that had devastated “Black Wall Street.” The past was not the past to them. “It was amazing seeing these black women and men who were crippled, blind, in wheelchairs,” Ogletree told me. “I had no idea who they were and why they wanted to see me. They said, ‘We want you to represent us in this lawsuit.’ ” A commission authorized by the Oklahoma legislature produced a report affirming that the riot, the knowledge of which had been suppressed for years, had happened. But the lawsuit ultimately failed, in 2004. Similar suits pushed against corporations such as Aetna (which insured slaves) and Lehman Brothers (whose co-founding partner owned them) also have thus far failed. These results are dispiriting, but the crime with which reparations activists charge the country implicates more than just a few towns or corporations. The crime indicts the American people themselves, at every level, and in nearly every configuration. A crime that implicates the entire American people deserves its hearing in the legislative body that represents them. John Conyers’s HR 40 is the vehicle for that hearing. No one can know what would come out of such a debate. Perhaps no number can fully capture the multi-century plunder of black people in America. Perhaps the number is so large that it can’t be imagined, let alone calculated and dispensed. But I believe that wrestling publicly with these questions matters as much as—if not more than—the specific answers that might be produced. An America that asks what it owes its most vulnerable citizens is improved and humane. An America that looks away is ignoring not just the sins of the past but the sins of the present and the certain sins of the future. 
More important than any single check cut to any African American, the payment of reparations would represent America’s maturation out of the childhood myth of its innocence into a wisdom worthy of its founders.

In 2010, Jacob S. Rugh, then a doctoral candidate at Princeton, and the sociologist Douglas S. Massey published a study of the recent foreclosure crisis. Among its drivers, they found an old foe: segregation. Black home buyers—even after controlling for factors like creditworthiness—were still more likely than white home buyers to be steered toward subprime loans. Decades of racist housing policies by the American government, along with decades of racist housing practices by American businesses, had conspired to concentrate African Americans in the same neighborhoods. As in North Lawndale half a century earlier, these neighborhoods were filled with people who had been cut off from mainstream financial institutions. When subprime lenders went looking for prey, they found black people waiting like ducks in a pen.

“High levels of segregation create a natural market for subprime lending,” Rugh and Massey write, “and cause riskier mortgages, and thus foreclosures, to accumulate disproportionately in racially segregated cities’ minority neighborhoods.”

Plunder in the past made plunder in the present efficient. The banks of America understood this. In 2005, Wells Fargo promoted a series of Wealth Building Strategies seminars. Dubbing itself “the nation’s leading originator of home loans to ethnic minority customers,” the bank enrolled black public figures in an ostensible effort to educate blacks on building “generational wealth.” But the “wealth building” seminars were a front for wealth theft. In 2010, the Justice Department filed a discrimination suit against Wells Fargo alleging that the bank had shunted blacks into predatory loans regardless of their creditworthiness. This was not magic or coincidence or misfortune. It was racism reifying itself. According to The New York Times, affidavits found loan officers referring to their black customers as “mud people” and to their subprime products as “ghetto loans.”

“We just went right after them,” Beth Jacobson, a former Wells Fargo loan officer, told The Times. “Wells Fargo mortgage had an emerging-markets unit that specifically targeted black churches because it figured church leaders had a lot of influence and could convince congregants to take out subprime loans.”

In 2011, Bank of America agreed to pay $355 million to settle charges of discrimination against its Countrywide unit. The following year, Wells Fargo settled its discrimination suit for more than $175 million. But the damage had been done. In 2009, half the properties in Baltimore whose owners had been granted loans by Wells Fargo between 2005 and 2008 were vacant; 71 percent of these properties were in predominantly black neighborhoods.

In 2017, almost 660,000 people were arrested for cannabis-related charges in the U.S., the FBI reported recently. This means that, according to a recent open letter about equity and justice released by Equity First Alliance, even as legalization sweeps the nation, over half a million people are still losing their liberty, voting rights, and access to education, housing and future employment every year. To make things worse, while many jurisdictions that have already legalized marijuana have promised to clean up the records of those convicted for non-violent cannabis offenses, most of those convicted are still on the hook.
In Los Angeles, California, the largest recreational cannabis market in the world, hundreds of thousands of cannabis-related convictions have yet to be expunged. In Colorado, unfairness has also persisted and prevailed. “Young people of color have been arrested at higher rates for cannabis possession since legalization happened, while arrest rates for young white people have declined,” said Adam Vine of the Equity First Alliance. “Given the racial bias in the criminal justice system, all of these provisions continue to disproportionately harm people of color.”

“In Pennsylvania, prior cannabis convictions prevent people from joining the medical cannabis workforce,” he added. “And, in Illinois, those same convictions have been preventing people from becoming cannabis patients.”

Finally, the 2018 Senate Farm Bill contains language that would legalize hemp at the federal level. However, the new law would still bar people with felony drug convictions from participating in the hemp industry.

A Noble (H)Emprize

According to Sonia Erika of the Massachusetts Recreational Consumer Council, a spokesperson for Equity First Alliance who helped to organize N.E.W. and its events, “Automatic expungement, post-conviction relief, and other aspects of criminal justice and policing reform must be a part of all cannabis legalization.” The problem, in her view, is raising awareness.

In an attempt to capture the attention of the American public, a coalition of more than 20 organizations working at the intersection of the cannabis industry, racial equity, and reparative justice has joined local and community groups across the country for the inaugural National Expungement Week (N.E.W.), October 20-27, 2018.

N.E.W. will offer free clinics to help to remove, seal, or reclassify eligible convictions from criminal records. N.E.W. events will be held in cities across the country. Many of the N.E.W. events will also provide attendees with supportive services including employment resources, voter engagement, and health screenings. The N.E.W. website provides a link to an online toolkit for communities who want to host their own record change events now and in the future.

When we accept that prestigious offer of admission from Princeton University, some small part of us becomes part of the great history of Princeton ― and so some part of us becomes shackled, forever, to the stains of slavery, Jim Crow, and continued racism. Just as the United States and white Americans themselves are bound, morally, to offer reparations to African-Americans, so too is the institution of Princeton University. Because of the University’s complicity in slavery and structural racism, it has an ethical commitment to provide justice in the form of reparations to African-American students.

It is still somewhat controversial to remind ourselves that the United States was founded as a slaveholding nation, with slaveholding founders, with slavery in our Constitution. We are still haunted by this past. But to focus on the “nation as a whole” is to miss our own history, right here on campus. Princeton, of course, was not above holding slaves. Thanks to the publication last summer of the “Princeton & Slavery” project, Princeton has put together a tally of its own particular crimes. The first nine Princeton presidents held slaves, as did a majority of our founding trustees. More Princetonians fought for the Confederacy than the Union. Princeton held slave auctions on its own grounds.
Professors owned slaves ― some endowed professorships still honor men who came into their fortunes through slavery (Ewing, Dod, McCormick, Madison) ― and donations were financed from slave sales. Why would any of our professors, experts in their fields, want to be associated with these names? Then there are the sales themselves. How much of our mighty endowment, then, is soiled with that blood capital ― as interest and prestige accumulate year over year over year?

Princeton has always been the most conservative and “southern” Ivy League school. By the way, that is not my own perception ― in a letter to W.E.B. Du Bois, a Princeton University administrator argued that “we have never had any colored students here, though there is nothing in the University statutes to prevent their admission. It is possible, however, in our proximity to the South and the large number of Southern students here, that Negro students would find Princeton less comfortable than some other institutions.”

Institutional prejudice did not end with our vaunted Woodrow. Yes, southern enrollment, which had been low after the Civil War, bounced back under his tenure, but Princeton remained white. The first African American to graduate from the University in peacetime did so in 1951.

Some might argue that Princeton has changed ― we no longer have slaves, nor do we prevent African Americans from entering the FitzRandolph Gate. But by excluding African Americans over many generations, Princeton left them unable to access the capital, prestige, and resources that white students were able to have. While Princeton opened doors for white students, the FitzRandolph Gate stayed barred for blacks. And as we understand more deeply the cost of “dream hoarding” ― the upper-middle class’s stranglehold on chance ― does Princeton not bear some of this cost too?

“White America was ready to demand that the Negro should be spared the lash of brutality and coarse degradation, but it had never been truly committed to helping him out of poverty, exploitation or all forms of discrimination.” ― Dr. Martin Luther King Jr., “Where Do We Go From Here?”

Princeton undergraduates, because of their voluntary enrollment at Princeton, are complicit. Hence, we are all obligated to see that the University honors its own obligations. “But I’m not guilty,” you might say. And the response to that is simple enough ― I am not arguing for individual Princeton undergraduates to provide reparations (though many of us likely ought to), I am arguing for Princeton University, as an institution, to right its wrongs. We, as undergraduates who voluntarily accepted Princeton’s offer of admission, should be bound by its obligations much as we are bound by many other obligations imposed on us once we agree to matriculate ― to write a thesis, to take so many classes a semester, to go on Outdoor Action, to stay out of disciplinary or academic trouble. We all accept admission on the understanding that there are obligations. And the University, in its own capacity, has done wrong — and not wrong once, but wrong for generations. Any one undergraduate’s guilt or lack thereof is inconsequential. The University must atone for its wrongs.

Princeton University is wealthy ― almost incomparably so. And yes, a certain amount of that wealth should be set aside for financial reparations to African Americans. But what Princeton could do that no other institution could do is use the resource that is most valuable and most irreplaceable ― its students and faculty.
Princeton should require all students to contribute to the wellbeing of communities that it has almost certainly harmed throughout the years ― in particular, communities of color. Princetonians ― the collected assemblage of confident, competent individuals ― have an unparalleled pedigree and skill set that makes us and our intelligence far more valuable than even the billions in our stock market portfolio. Throwing money at problems is great, but throwing skilled human capital is even better. This is our opportunity to liaise with organizations on the ground, to learn from them and assist them in some substantial capacity. Princeton produces a bevy of skilled students ― engineers, statisticians, writers, artists ― that would be useful to almost any organization. Why does Princeton not provide those students the means to do the good work enshrined in our motto ― In the Nation’s Service and the Service of Humanity? Instead, the University seems more interested in ensuring Wall Street’s continued access to the best and brightest. But Princeton owes Wall Street nothing ― it owes those from whose plunder it has benefited.

I cannot think that what has been done so far is anywhere near enough. Naming an administrative building after Toni Morrison is not the same as renaming the Wilson School. Tour stickers around campus are insufficient. Increasing diversity is not the same as reparative admissions policies ― for the Class of 2022, African Americans make up only 8 percent ― roughly the same share as the Classes of 2021, 2020, and my own Class of 2019. Yet African Americans are 13.4 percent of the U.S. population. If reparations means giving more now to make up for less previously, we are failing dramatically.

As for concrete proposals, I can’t confess to knowing exactly how Princeton should go about reparations. And ultimately, it is not my place to. I have no interest in falling into the neoliberal “white savior” trap. Perhaps the best way, as Professor Avery Kolers of the University of Louisville suggests, is for Princeton to make money and resources (credit hours, paid faculty and staff time, etc.) available to African American students, alumni, and targeted New Jersey organizations, who would then have the final say on how those resources could be used to make reparations. By having them serve as judges on competitive project boards, or in some other way that ensures Princeton’s reparations are not imposed from the top down, we could actively take into account the voices and perspectives of those the reparations are meant to aid.

“Two hundred fifty years of slavery. Ninety years of Jim Crow. Sixty years of separate but equal. Thirty-five years of racist housing policy. Until we reckon with our compounding moral debts, America will never be whole.”

Having done wrong, what compels us to reparations? First, an appeal to common decency. When we do wrong, we are often expected or encouraged, whether by others or our own conscience, to do right by those we have done wrong. Even in the simple case of insulting someone, we can see that as an example of wrongfully taking status or dignity from that other person. An apology is the reparation. The ravages of slavery and the inequities of racism are far worse ― should not the reparations, then, be proportional to the harms done? Second, as John Locke points out, when someone does something wrong to another human, they violate that human’s status and dignity as an equal and as an important individual worthy of certain fundamental, natural rights.
This should be familiar to any American: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness.”

Reparations are a great divider. More than any other article, I know this one may have the greatest potential for controversy. Some part of me wishes I could hold my tongue. But it is better to speak the honest truth ― that which you know is right ― than the truth most palatable. Princeton ― despite having bravely acknowledged its past failings ― has not done enough to make up for them. In terms of symbolic reparations (names on buildings), or in terms of financial reparations (donations and financial aid), or in terms of reparations of human capital (volunteerism and affirmative action), Princeton has something to give. If white America owes black America, then so too does Princeton University have its own debt to pay.

In a rare move for an educational institution, Scotland’s Glasgow University has admitted to receiving millions of dollars from slavery in Africa and the Caribbean. It is now putting in place structures to pay reparations, a move that has been lauded around the world. According to the report, although many of the University’s staff were against slavery, and although the University itself neither held any enslaved person nor traded in goods produced by enslaved people, it received gifts and bequests from people who had benefitted from the proceeds of slavery, and those funds were used to support the academic activity of its students.

The University of Glasgow acknowledges that during the eighteenth and nineteenth centuries it received some gifts and bequests from persons who may have benefitted from the proceeds of slavery. Income from such gifts and bequests has been used in supporting academic activity undertaken by the students and staff of the University.

The report further listed some of the people who graduated from the university and went on to become slave-owners in the Caribbean. One of those named is Robert Cunninghame Graham, who graduated from the University and became its rector in the 1780s. Graham owned and ran a plantation in Jamaica, where enslaved people worked extreme hours in terrible conditions. He was also known to have fathered many children with enslaved women, enslaving these children and even selling some of them to other plantations.

“It is possible that some of the people that Graham sold were his own children… It is possible that Ardoch may have been one of Graham’s children, for the young enslaved man was named for a Scottish estate in Graham’s family, an estate which Graham later inherited. It seems likely that only Graham could have been the source for this unusual name for an enslaved child,” the report says.

Upon his return to Scotland, Graham served as rector of the University until 1787. As per the report: A year after stepping down from the Rectorship Graham made a gift of £100 to the University of Glasgow to establish the Gartmore Gold Medal, to be awarded every two years for the best student work on ‘Political liberty’.
By the time that Graham served as Rector and endowed a prize for the best student work on liberty he had been a slave-owner for nearly forty years, owning many people like Ardoch, Beniba and Martin, and he had made his fortune from their labour and from his trading and selling of the sugar they and other enslaved people produced.

Graham’s gift is among the 16 bursaries, endowments and mortifications donated between 1809 and 1937 that have a direct link to the profits from slavery, 11 of which have continued to generate income for the university to this day. Some of these endowments were received from former slave owners who had been compensated for losing enslaved people when slavery was abolished.

It is against this background that the University has laid out a series of activities as part of the reparative justice programme. It is also planning to increase the “racial diversity of students and staff and to reduce the degree attainment gap” as well as to create an “interdisciplinary centre for the study of historical slavery and its legacies, including modern slavery and trafficking”. Also part of the recommendation is the collaboration between the University and the University of the West Indies (UWI). The move has been welcomed by UWI vice-chancellor and chairman of the CARICOM Reparations Commission, Professor Hilary Beckles.

“I have looked closely at the report, reading it within the context of the University of Glasgow-University of the West Indies framework for mutual recognition and respect. The approach adopted by the University of Glasgow is commendable and is endorsed by the UWI as an excellent place to begin. Both universities are committed to excellent and ethical research, teaching and public service. I celebrate colleagues in Glasgow for taking these first steps and keenly anticipate working through next steps,” Beckles, one of the three external advisors to the report, said.

Sir Geoff Palmer, Scotland’s first black professor, not only welcomed the report but also called on institutions that had benefited from the slave trade to make amends. “Now, I think the country faces a very uncomfortable question which the Glasgow University report has raised once more: to what extent did slavery make Scotland great? We can have all the equality laws and anti-racism legislation we like, but if no other institutions, firms or organisations which also benefited from slavery declare this and seek to make amends then it’s all meaningless,” he said to the Guardian.

The conversation about reparations for slavery has been ongoing for years. In 2016, Jamaica demanded that Britain start making reparations for slavery, stating that it is the duty of the former colonial master to alleviate the continued suffering of the Caribbean people. A regional body known as the Caribbean Reparations Commission was set up to establish the case for reparations from the governments of all the former colonial powers, and it drew up a ten-point plan for doing so.

This week, Chicagoans celebrated Rahm Emanuel’s announcement that he will not seek another term as mayor. But while Emanuel’s departure is welcome news to many, the next mayor of Chicago will have to come up with an aggressive plan to repair the damage that Emanuel’s financial policies have inflicted on the city’s Black and Latinx communities. Otherwise the devastation that Emanuel’s tenure in office wreaked on Chicago’s communities of color will be with us for decades to come.
Mayor Emanuel systematically monetized pain in communities of color to enrich his Wall Street backers. Since he took office in May 2011, Chicago has paid $346 million in police misconduct settlements and judgments. Emanuel paid a large portion of these costs by taking out bonds, which must be paid back with interest. The interest and fees on these bonds add up to hundreds of millions of dollars, which the city pays before ensuring there is funding for critical public services. When faced with a budget crunch, Emanuel closed mental health clinics, which could have played an important role in preventing people of color from having adverse contact with racist police officers. In 2017, the mayor borrowed $225 million to pay for future police misconduct settlements and judgments. In other words, Emanuel gave his buddies on Wall Street an advance payment on the lives of Black and Latinx Chicagoans whom he knows his police department will brutalize or murder at some point in the future. Similarly, the mayor and his appointees on the school board refused to take legal action against the banks that fraudulently sold the city and school district toxic swap deals. Chicago Public Schools paid banks such as Bank of America $36 million a year for these toxic swaps—enough money to reverse the 50 school closings Emanuel oversaw in 2013. But not only did Emanuel refuse to take legal action against the banks, he actually signed multiple agreements waiving Chicago’s right to recoup its losses through legal action. Emanuel also used the city’s Tax Increment Financing (TIF) program as a slush fund that drained money from the city’s neighborhoods and schools in communities of color and funneled it into tax subsidies for developers and wealthy corporations in the richer, whiter parts of the city. All of these financial shenanigans are part of the neoliberal regime that has dominated City Hall for the past few decades under Emanuel and his predecessor, Richard M. Daley. Like Donald Trump, they believed making Chicago great again meant bringing back the white people who had abandoned the city for the suburbs during white flight. In order to lure rich white folks back to the city, they ignored the needs of Chicago’s communities of color, whom they did not deem worthy of the city’s resources. While defending the closure of schools in Chicago’s Black neighborhoods, Emanuel reportedly told Chicago Teachers Union President Karen Lewis, “25 percent of these kids are never going to be anything. They are never going to amount to anything. And I’m not going to throw resources at them.” Daley and Emanuel repealed progressive corporate taxes and funneled tax money from the neighborhoods into downtown. They manufactured budget crises in order to justify the privatization of the city’s infrastructure, the charterization of its school district, and attacks on city and school district employees and their pensions. The perennial budget crises that resulted from these irresponsible decisions were then used to justify risky financial deals that were highly lucrative for Wall Street and ultimately cost taxpayers billions of dollars. These policies have left deep scars, both in the city’s neighborhoods and in its bank accounts. The next mayor will not be able to wave a magic wand and undo all the damage that decades of neoliberal rule have wrought. The city and school district’s structural budget deficits are all too real. 
Before the next mayor can even start to think about righting the wrongs, they will need to find money under the couch cushions just to keep the lights on. There are only two ways forward: more taxes or more financial shenanigans. Under the Daley-Emanuel style of governance, both of these options would have hit communities of color. Tax increases would have been regressive, coming in the forms of red light and speeding cameras that are heavily concentrated in Black and Latinx neighborhoods. Financial shenanigans would have been used to justify more cuts to critical services.

The next mayor needs to flip the script. They need to aggressively raise revenue from the wealthy parts of the city in order to repair the damage to the South and West Sides. For decades, Black and Brown Chicago have been forced to shoulder the costs of Daley and Emanuel’s burning desire to revitalize White Chicago. The next mayor will have to target Black and Latinx communities for investment coming from progressive revenue sources that make rich residents in White Chicago and the major corporations downtown pay their fair share. These wealthy interests have benefited for nearly 30 years from policies that have prioritized the needs of corporations over those of poor communities of color. Chicago’s next mayor needs to make White Chicago pay reparations to Black and Brown Chicago to start to reverse these inequities and right these wrongs.

August 1 marked Emancipation Day, as many took to the London streets calling for slavery reparations.

Hundreds of members of London’s black community took to the streets to mark Emancipation Day and the annual Reparations March. The march, which began at Windrush Square, Brixton, and culminated at Parliament Square, was awash with colour and pride as many marked Emancipation Day by demanding that the government acknowledge the historic and ongoing impact of colonisation and slavery. This year’s march was held under the theme “Stop the Maangamizi,” and saw activists from all walks of life come together and demand justice.

Organisers of the march carried a petition, which states: “The blood, sweat and tears of our Ancestors financed the economic expansion of the United Kingdom. The immoral and illegal acts inflicted on Afrikans against their will cannot all be undone.

“However, the perpetrators, their descendants and all other beneficiaries, ought to be compelled to address the harm that has resulted from them. Today the offspring of the stolen Afrikans encounter direct and indirect racial discrimination daily. This results in impoverishment, lack of education, unemployment, imprisonment and ill health.

“Now is the time for the victims of these inhumane atrocities to demand, effect and secure holistic, adequate, comprehensive and intersectional reparations for the wrongs that continue to be inflicted on Afrika, Afrikans on the Continent and in the Diaspora.”

There are few things in U.S. culture more divisive than a discussion surrounding whether the descendants of the “peculiar institution” of slavery should be compensated. For most African-Americans (and half of Hispanics) the answer is yes, but more than half of white Americans remain opposed to the idea.
According to a 2016 Marist College poll, “Nearly six in ten Americans assert the current wealth of the United States is not significantly tied to work done in the past by slaves, although most consider the history of slavery and other forms of racial discrimination to be at least a minor factor in the gap in wealth between white and black Americans.” It also found that “68% of residents nationally do not think the United States should pay reparations to descendants of slaves, and a similar proportion of American adults, 72%, argue that the United States should not compensate African Americans, in general, for the harm caused by slavery and other forms of racial discrimination.”

Released in connection with the PBS debate series “Point Taken,” the poll revealed, when broken down along racial lines, that “Among the races polled, 81 percent of white Americans said no to reparations for slave descendants, the highest number of all races. The numbers were much closer among blacks and Hispanics, with 58 percent of blacks supporting reparations and 35 percent against the idea. Hispanic Americans were almost evenly divided with 47 percent against and 46 percent for providing money for slave descendants.”

“Point Taken” series creator and senior executive-in-charge Denise Dilanni said of the poll’s numbers: “These results, while not surprising, are indeed striking in the persistent racial divide in attitudes about reparations.”

Still, there are signs that younger generations may just usher in change, with the poll also finding that “More than half of millennials questioned say they are willing to at least consider the idea of paying reparations to the descendants of slaves.”

Historically, at least, the idea of reparations is nothing new, as seen in 1988 when President Reagan signed the Civil Liberties Act, which provided compensation to more than 100,000 Japanese-Americans incarcerated in internment camps during World War II. In addition to a formal apology, $20,000 was granted to each victim or next of kin of those illegally interned. A House report concluded that “A grave injustice was done to citizens and permanent resident aliens of Japanese ancestry by the evacuation, relocation, and internment of civilians during World War II.” The aftermath of the same war would prompt a similar move from Germany, whose murder of millions of Jewish people would later lead to over $822 million in reparations for the heirs of Holocaust survivors. Even a few Native American tribes (which have arguably witnessed the most injustice throughout this nation’s history) have been awarded compensation, with 17 winning a 2016 lawsuit that found they are owed $492 million from the U.S. government.

Attitudes towards reparations remain mixed depending on the context, with a 2014 YouGov study revealing a disparity it describes as occurring along racial and political lines. According to the report, “Most white Americans (51%) say that slavery is ‘not a factor at all’ in the lower average wealth of black Americans, something only 14% of black Americans agree with. Among black Americans attitudes are turned on their head, with 48% saying that slavery is a ‘major factor’ in their lower wealth levels today, something only 14% of white Americans agree with.”

When it comes to African-Americans, however, the horrors of slavery have largely been condensed and swept under the rug into the “get over it” pile. Despite years of legislative wrangling — including from former Rep.
John Conyers, who, in each congressional session since 1989, has introduced a bill to form a committee to examine slavery and study reparations proposals — reparations have become similar to the promise of “40 acres and a mule” that never materialized. “I’m not giving up. Slavery is a blemish on this nation’s history, and until it is formally addressed, our country’s story will remain marked by this blight,” Conyers told NBC News in 2017. In “The Case for Reparations,” writer Ta-Nehisi Coates presents a persuasive, well-written argument about the merits of at least discussing providing reparations to the descendants of slavery. “Something more than moral pressure calls America to reparations. We cannot escape our history. All of our solutions to the great problems of health care, education, housing, and economic inequality are troubled by what must go unspoken,” wrote Coates. Recalling a conversation with Chicago resident Clyde Ross, he continued, “‘The reason black people are so far behind now is not because of now,’ Ross told me. ‘It’s because of then.’” Just what reparations would look like, if feasible, remains highly contested. While some claim they shouldn’t have to “pay for something they had nothing to do with,” other critics have cited costs and the difficulty in determining which Black Americans should receive payments — and for how much. A 2015 analysis in Newsweek that found that reparations could cost up to $14 trillion — some 70 percent of the U.S. gross domestic product — illustrates the unlikelihood of such a proposal becoming a reality. But others point to America’s timeline of chattel slavery, oppressive Jim Crow legislation, and the ongoing financial disparity between Black and white Americans as ample proof that some version of reparations is the least that can be done. In 2005 economists William A. Darity Jr. and Dania Frank presented a paper outlining a variety of frameworks for how reparations could be made. The pair laid out proposals that would include lump-sum payments, monetary vouchers, or even a fund from which Black Americans could receive grants in order to finance ventures like education or home ownership. The duo wrote, “Thus reparations could function as an avenue to undertake a racial redistribution of wealth akin to the mechanism used in Malaysia to build corporate ownership among the native Malays.” Laying out their own criteria for qualifying, they added, “First, individuals would have to establish that they are indeed descendants of persons formerly enslaved in the United States. Second, individuals would have to establish that at least 10 years prior to the adoption of a reparations program they self-identified as ‘black,’ ‘African American,’ ‘Negro,’ or ‘colored.’” Loyola Law School professor Eric J. Miller also made his own case for reparations. “Part of our history is our grandparents participating in these acts of terrible violence [against black people]. But people don’t want to acknowledge the horror of what they engaged in,” he said. Nonetheless, while millennials may indeed be more open to the possibility of reparations for American slavery, solely relying on them to usher in change may prove to be disappointing as well. In a generation that is generally considered to be more racially inclusive than its predecessors, there still appears to be a disparity between the way that some black and white millennials view race in America. 
According to a 2017 GenForward study, while “Millennials of all racial backgrounds list racism as one of the three most important problems in America,” the researchers also found that “Nearly half (48%) of white Millennials believe that discrimination against whites has become as big a problem as discrimination against Blacks and other minorities, while only about a quarter of African Americans, Asian Americans and Latinos share this view.” Certain debates spark a similar divide in attitudes, with the study also finding that “A majority of African Americans (56%) and plurality of Asian Americans (43%) have a favorable opinion of Black Lives Matter, but only 27% of Latinos and 19% of whites share this view.” Conversations regarding Confederate history evoke similar responses, with those polled revealing that “A majority of Millennials of color believe the Confederate flag is a symbol of racism and support removing Confederate statues and symbols from public places. In contrast, a majority of whites (55%) see the Confederate flag as a symbol of Southern pride and oppose removing Confederate statues and symbols (62%).” In December 2017 the Washington Post released its own findings, concluding that “feelings of white vulnerability” contributed to the “41 percent of white millennials [that] voted for Trump in 2016.” “About 84 percent of millennial Trump voters were white,” the Post reported, noting that “[compared] to white voters who did not support Trump, Trump voters were more likely to be male, married and without college education.” Millennials, now roughly between the ages of 22 and 37, were not alive for the severity of Jim Crow and the bitter fight for civil rights, but the prevailing attitudes of those who raised them — both good and bad — still persist. As the Chicago Tribune noted last August, “millennials overall are more racially tolerant than earlier generations — but that’s because young people today are less likely to be white. White millennials exhibit about as much racial prejudice, as measured by explicit bias, as white Gen Xers and boomers. Yet even young people know that overt racial animus is socially frowned upon, a deal-breaker for those seeking friends, spouses or gainful employment.” Such findings show that as the fight for equality — and perhaps even reparations — continues, it cannot be left on the shoulders of younger Americans alone. All hands must be on deck. The ‘Office for Reparations’ Bill, which aims to provide legal provisions for the establishment of the Office for Reparations and to identify war-affected people eligible for reparations, has been placed on the Order Paper of Parliament for today. Like the Office on Missing Persons, the establishment of this new office is part of the matters envisaged in the UNHRC resolution on Sri Lanka adopted on October 1, 2015. It is meant to identify aggrieved persons eligible for reparations and to provide individual and collective reparations to such persons; it would also repeal the Rehabilitation of Persons, Properties and Industries Authority Act, No. 29 of 198, and is to be presented to Parliament today.
Responsibilities of this office will include: receiving recommendations with regard to reparations to be made to aggrieved persons from the Office on Missing Persons or other relevant bodies or institutions; receiving applications for reparations from aggrieved persons or their representatives and verifying the authenticity of such applications for the purpose of assessing eligibility for reparations; identifying the aggrieved persons who are eligible for reparations, as well as their level of need; identifying and collating, through a centralized database, information relating to previous or ongoing reparation programmes carried out by the State, including any expenditure on similar reparation programmes; and making rules to ensure the effective functioning of the Office for Reparations. The Office for Reparations, if established, will consist of five members appointed by the President on the recommendation of the Constitutional Council. The Constitutional Council shall recommend three of the members for appointment as Chairperson of the Office for Reparations, and one of the members so recommended shall be appointed Chairperson by the President. A member may serve for a period of three years. For the purposes of the Bill, the members of this office will be deemed to be public servants under the Penal Code, the Bribery Act and the Evidence Ordinance. (Yohan Perera) The anniversary of the New Orleans Massacre of 1866 is coming up later this month, on July 30th. We should not forget this event, essentially a race riot, in which 238 people were killed, the vast majority of whom were Black war veterans who had fought for the Union, even though the war had ended a year earlier. The massacre was a crucial factor which led to the passage of the Reconstruction Acts. For a brief moment, a century and a half ago, it seemed as if our nation was poised to begin the long and difficult process of healing from the wounds which slavery inflicted on the body politic. Instead, that process was sabotaged and repressed in relatively short order. Call it the Deconstruction of Reconstruction. Is there a city in the United States where we know this better? The legacy of slavery looms large here, though official acknowledgements of this history are scanty. The Crescent City was also home to one of the largest populations of free people of color. It stands to reason that people in New Orleans should be at the forefront of the newly re-invigorated movement toward making reparations for slavery. And so we are. On July 9th, there was a meeting on the prospect of a local platform for such reparations. It was sponsored by the Green Party of New Orleans. (Note: I serve as chair of this group.) All our meetings at the Mid-City Library are free and open to the public. Local artist, activist and entrepreneur Anika Ofori drew on her experience working with the Green Party of the United States to give an informal presentation which informed and framed our discussion. Anika began with a brief historical overview, outlining the establishment of the Freedmen’s Bureau and the Reconstruction Acts, as well as the sabotage of Reconstruction through acts of violence.
Current efforts to establish reparations for slavery stem from the middle of the 20th century; they picked up momentum in the 1990s and were slowed temporarily by the political shifts after the terrorist attacks of 2001. Anika recounted the work of the National Coalition of Blacks for Reparations in America (N’COBRA) and the National African American Reparations Commission (NAARC), including NAARC's preliminary 10-Point Plan, which is modeled after a similar plan endorsed by the Caribbean Community (CARICOM). It’s a holistic program that would finally set our nation on the road to healing. A wealth of detailed information can be found via the Reparations Resource Center, maintained by the Institute of the Black World 21st Century (https://ibw21.org/reparations-resource-center/). House Resolution 40, officially titled the “Commission to Study and Develop Reparation Proposals for African-Americans Act,” was introduced in Congress in January of 2017 by John Conyers, Jr., who had introduced the bill repeatedly over almost three decades. With his resignation in December, the future of the bill is unclear. Nevertheless, the Green Party of the United States has endorsed the idea of reparations, both in general principle as part of the Green Party platform and more specifically by endorsing H.R. 40. We concluded our meeting with a discussion of how we might promote the reparations issue locally, here in New Orleans, which after all was once the preeminent hub of the slave trade on this continent. Our conversation brought to light a series of questions. For example: How do we incorporate support for reparations in our local platform, currently under development? We anticipate some points from the NAARC ten-point plan will be included in other parts of our platform. Do we highlight these connections or address reparations separately? How do we advocate for reparations to a greater public that might not be educated on the issue and might be resistant to it? Are there particular policies and demands in the call for reparations that our chapter is in the best position to pursue locally? The discussion of reparations also raised questions about the priorities and identity of the Green Party. Despite the official support for reparations in the Green Party platform, there were concerns that the Green Party as a whole isn’t sufficiently racially inclusive or aware of race issues. There were also concerns that prioritizing reparations and racial justice might alienate members of the party who are most interested in economic and/or climate matters. Others at the meeting suggested that economic, racial, and climate justice are not mutually exclusive and have to be pursued simultaneously because they are all interconnected. Thanks to Neil Ranu, Green Party of New Orleans secretary, for preparing notes on the meeting, which were used extensively for this article.
TIP 4: Verbally Link Your Ideas in a Paragraph Together
Verbally link your ideas in a paragraph together, using summative references to preceding ideas, repetitions and parallel constructions, and transitional linkages. [These terms are defined and illustrated below.] Under TIP 3, we compared a paragraph composed of sentences to a chain composed of links, where the links are welded together by careful control of context. A paragraph can also be thought of as a path through the woods. For the reader to follow the path, trail markers are needed to point out the way. These trail markers, which we call verbal linkages, are words that briefly and simply tell the reader about what has just been said, or what is going to be said next. Which trail would you rather be following?
VERBAL LINKAGE 1: SUMMATIVE REFERENCES
A summative reference is a convenient label that an author attaches to an idea or set of ideas in a paragraph. The reference may refer back to the content of preceding sentences, or refer forward to an idea which is about to be presented. In the first example below, "This hypothesis" summarizes and labels the set of ideas in the sentences which came before. A summative reference is far clearer than a vague "this" which refers back to something unspecified.
- This hypothesis can be tested by.... Compare: This can be tested by....
- All of these problems can be avoided when.... Compare: This can be avoided when....
- This conclusion was confirmed by.... Compare: This was confirmed by....
The use of a vague reference to "this" confuses the reader, but when the author specifies what "this" is with words like those shown above, the reference is not only clear, it makes the whole paragraph clearer. If you conclude a passage with "This hypothesis can be tested by...," the word "hypothesis" places a very specific meaning on the preceding sentences. These are not simply a set of ideas, but a set of ideas that are going to be tested by a scientific experiment. Similarly, labeling a statement as a conclusion gives it more meaning than a simple statement of opinion. Summative references are particularly useful in introducing a list of items. In our revised version of Paragraph Example 1, the phrases highlighted below are summative references that are indispensable in helping us to understand the grab bag of items that follow. To verify the value of these introductory summative references, just recall the confusion created by these unlabelled lists in the original paragraph. Summative references (highlighted) are also useful in our revision of Paragraph Example 3. Giving readers a "capsule" version of the list allows them to skim through the list, rather than analyze it in detail to figure out what is similar among the items. The final paragraph in our revised version of Paragraph Example 3 also illustrates the use of summative references (highlighted). The term "epidemiology of preterm delivery" summarizes the topics of the preceding paragraphs (incidence, morbidity/mortality/sequelae, and risk factors). The term "confusion" summarizes the ideas in the first half of the paragraph ("difficult," "confounded," "lack of consensus," "difficult"). Note how much clearer the finale is when we use "This confusion," rather than "This," the wording of the original paragraph.
VERBAL LINKAGE 2: REPETITIONS AND PARALLEL CONSTRUCTIONS
Repetitions are used routinely to maintain a thread of meaning throughout a paragraph.
For example, recall in our revised version of Paragraph Example 1 how our repetitions of the word "sequelae" created more clarity and coherence in the paragraph. This simple device can be used very artfully, as in the example below from Abraham Lincoln's Gettysburg Address, in which the repeating phrases are underlined: Note how Lincoln's repetition of the words "war" and "field" creates a zoom effect, moving our attention from the larger war, to the battlefield of Gettysburg, and finally to the section of that field which is the burial ground for the soldiers who died in that critical battle. He is not just playing with words, but ensuring that his audience understands the relevance of the ceremony dedicating the burial ground to the larger conflict in which the nation is engaged. Repetitions are often used to create parallel constructions at the paragraph level. In the following example from another section of the Gettysburg Address, we can see the power of both repetitions and parallel constructions. (I use color codes to identify words that either repeat or contrast with other words in the paragraph.) This passage begins with paired repetitions and opposites that juxtapose "us" vs "them," i.e. the participants in the dedication ceremony vs those who fought in the battle of Gettysburg. While we are only talking, they were doing. Hence posterity will not "long remember" us, but can "never forget" their service. We are merely dedicating the ground on which they actually fought. To redress the balance, Lincoln asks us to be dedicated to "the great task remaining before us," which he sets up as a series of "that" clauses: we are asked to 1) take devotion in emulation of theirs, 2) ensure that the dead have not died in vain, 3) inaugurate a new birth of freedom, and 4) ensure that our form of government lasts forever. Thus in his series of "that" clauses, he defines the great task as an escalating set of challenges to turn around the tragic loss of life which has occurred: to turn their deaths into not only a memorial, but a chance for collective rebirth and even immortality. More mundane uses of parallel constructions in scientific writing can be seen in our revisions of Paragraph Examples 1 and 3. In Example 1 (Rickettsial Encephalitis, see Tip 2), the two long lists of sequelae are set up in parallel, introduced by "Other central nervous system sequelae" and "Moreover, outside the central nervous system." In Example 3 (below), notice the highlighted parallel constructions. They make it easier for the reader to grasp the meaning quickly and without confusion.
VERBAL LINKAGE 3: TRANSITIONAL LINKAGES
Transitional linkages are conjunctions or phrases that help readers find their way through a paragraph, like the blaze on a trail. All authors use simple conjunctions such as "and" or "so" in their writing. Consider using more informative linkages where a "trail marker" is needed:
- Simple or non-specific linkages: and, but, nor, for, yet, or, so
- Complex linkages: however, moreover, therefore, nonetheless, in contrast
Complex conjunctions offer a powerful way to signal your meaning with minimal clutter in the paragraph. When you use "however" or "in contrast," the reader is told that what is coming next is different from what came before. Without such a marker, the reader may become confused and look back to see what he may have misunderstood, because the sentence appears to be contradicting what was just said. Similarly, the word "therefore" is indispensable in introducing a logical conclusion.
"Moreover" tells the reader to expect supplemental information on the same topic. Below is a table of useful transitional linkages. |Other Transitional Devices or Linkages*| |addition||again, also, and, and then, besides, equally important, finally, first, further, furthermore, in addition, in the first place, last, moreover, next, second, still, too| |comparison||also, in the same way, likewise, similarly| |concession||granted, naturally, of course| |contrast||although, and yet, at the same time, but at the same time, despite that, even so, even though, for all that, however, in contrast, in spite of, instead, nevertheless, notwithstanding, on the contrary, on the other hand, otherwise, regardless, still, though, yet| |emphasis||certainly, indeed, in fact, of course| |example or illustration||after all, as an illustration, even, for example, for instance, in conclusion, indeed, in fact, in other words, in short, it is true, of course, namely, specifically, that is, to illustrate, thus, truly| |summary||all in all, altogether, as has been said, finally, in brief, in conclusion, in other words, in particular, in short, in simpler terms, in summary, on the whole, that is, therefore, to put it differently, to summarize| |time sequence||after a while, afterward, again, also, and then, as long as, at last, at length, at that time, before, besides, earlier, eventually, finally, formerly, further, furthermore, in addition, in the first place, in the past, last, lately, meanwhile, moreover, next, now, presently, second, shortly, simultaneously, since, so far, soon, still, subsequently, then, thereafter, too, until, until now, when| * Modified from: Guide to Grammar and Writing, Coherence: Transitions between Ideas: http://grammar.ccc.commnet.edu/grammar/transitions.htm All of our revised paragraph examples include transitional linkages. Note in Paragraph Example 1, on rickettsial encephalitis, the revision added the word "Moreover" to add on the second list of sequelae. Note in Paragraph Example 2, the crucial use of "In contrast" when the topic switches from amphibian to mammalian experiments. In the revision below, from Paragraph Example 3, the highlighting shows examples of all three types of verbal linkages at work. We have already studied the summative references, repetitions and parallel constructions in this passage. Transitional linkages, highlighted in yellow, further add to the coherence and clarity of the piece. "In fact" is used to add corroborative data. "Furthermore" marks the addition of supplementary information. In the last paragraph, three transitional linkages are added. This paragraph is based on a logical principle of order, but in the original, its logic is presented in a confusing way. In the revision, verbal linkages are used to help to weld together the logical connections. "Although" marks a shift in the discussion from what is known to what is not known. "In consequence" tells the reader that the statement to come follows directly from that which precedes. Finally, "hence" is used to emphasize the beginning of a logical conclusion. Logical paragraphs often need more explicit verbal linkages to enhance clarity.
Rational numbers satisfy the commutative, associative and distributive laws for addition and multiplication. Moreover, if we add, subtract, multiply or divide (except by zero) two rational numbers, we still get a rational number (that is, rational numbers are ‘closed’ with respect to addition, subtraction, multiplication and division). It turns out that irrational numbers also satisfy the commutative, associative and distributive laws for addition and multiplication. However, the sum, difference, quotients and products of irrational numbers are not always irrational. For example, `(sqrt6)+(-sqrt6),(sqrt2)-(sqrt2),(sqrt3).(sqrt3)` and `sqrt7/sqrt7` are rationals. Let us look at what happens when we add and multiply a rational number with an irrational number. For example, `sqrt3` is irrational. What about 2 + `sqrt3` and `2sqrt3`? Since `sqrt3` has a non-terminating non-recurring decimal expansion, the same is true for `2+sqrt3` and `2sqrt3`. Therefore, both 2 + `sqrt3` and `2sqrt3` are also irrational numbers. Let us see what generally happens if we add, subtract, multiply, divide, take square roots and even nth roots of these irrational numbers, where n is any natural number. Let us look at some examples. Example: Add `2sqrt2+5sqrt3` and `sqrt2-3sqrt3`. Solution: `(2sqrt2+5sqrt3)+(sqrt2-3sqrt3)=3sqrt2+2sqrt3`. Example: Multiply `6sqrt5` by `2sqrt5`. Solution: `6sqrt5xx2sqrt5=12xx5=60`. Example: Divide `8sqrt(15)` by `2sqrt3`. Solution: `(8sqrt(15))/(2sqrt3)=4sqrt(15/3)=4sqrt5`. These examples may lead you to expect the following facts, which are true: (i) The sum or difference of a rational number and an irrational number is irrational. (ii) The product or quotient of a non-zero rational number with an irrational number is irrational. (iii) If we add, subtract, multiply or divide two irrationals, the result may be rational or irrational. We now turn our attention to the operation of taking square roots of real numbers. Recall that, if a is a natural number, then `sqrta=b` means `b^2=a` and b > 0. The same definition can be extended for positive real numbers. Let a > 0 be a real number. Then `sqrta=b` means `b^2=a` and b > 0. In Section 1.2, we saw how to represent `sqrtn` for any positive integer n on the number line. We now show how to find `sqrtx` for any given positive real number x geometrically. For example, let us find it for x = 3.5, i.e., we find `sqrt(3.5)` geometrically. Mark the distance 3.5 units from a fixed point A on a given line to obtain a point B such that AB = 3.5 units (see Fig. 1.15). From B, mark a distance of 1 unit and mark the new point as C. Find the mid-point of AC and mark that point as O. Draw a semicircle with centre O and radius OC. Draw a line perpendicular to AC passing through B and intersecting the semicircle at D. Then, BD = `sqrt(3.5)`. More generally, to find `sqrtx`, for any positive real number x, we mark B so that AB = x units, and, as in Fig. 1.16, mark C so that BC = 1 unit. Then, as we have done for the case x = 3.5, we find BD = `sqrtx` (see Fig. 1.16). We can prove this result using the Pythagoras Theorem. Notice that, in Fig. 1.16, triangle OBD is a right-angled triangle. Also, the radius of the circle is `(x+1)/2` units. Therefore, OC = OD = OA = `(x+1)/2` units. Now, OB = `x-((x+1)/2)=(x-1)/2`. So, by the Pythagoras Theorem, we have `BD^2=OD^2-OB^2=((x+1)/2)^2-((x-1)/2)^2=(4x)/4=x`. This shows that BD = `sqrtx`. This construction gives us a visual and geometric way of showing that `sqrtx` exists for all real numbers x > 0. If you want to know the position of `sqrtx` on the number line, then let us treat the line BC as the number line, with B as zero, C as 1, and so on.
Draw an arc with centre B and radius BD, which intersects the number line in E (see Fig. 1.17). Then, E represents `sqrtx`. We would like to now extend the idea of square roots to cube roots, fourth roots, and in general nth roots, where n is a positive integer. Recall your understanding of square roots and cube roots from earlier classes. What is `root(3)(8)`? Well, we know it has to be some positive number whose cube is 8, and you must have guessed `root(3)(8)=2`. Let us try `root(5)(243)`. Do you know some number b such that `b^5` = 243? The answer is 3. Therefore, `root(5)(243)` = 3. From these examples, can you define `root(n)(a)` for a real number a > 0 and a positive integer n? Let a > 0 be a real number and n be a positive integer. Then `root(n)(a)` = b, if `b^n` = a and b > 0. Note that the symbol '`sqrt`' used in `sqrt2`, `root(3)(8)`, `root(n)(a)`, etc. is called the radical sign. We now list some identities relating to square roots, which are useful in various ways. You are already familiar with some of these from your earlier classes. The remaining ones follow from the distributive law of multiplication over addition of real numbers, and from the identity (x + y)(x – y) = `x^2-y^2`, for any real numbers x and y. Let a and b be positive real numbers. Then (i) `sqrt(ab)=sqrta sqrtb` (ii) `sqrt(a/b)=sqrta/sqrtb` (iii) `(sqrta+sqrtb)(sqrta-sqrtb)=a-b` (iv) `(a+sqrtb)(a-sqrtb)=a^2-b`. Let us look at some particular cases of these identities. Example: Simplify the following expressions: Solution: (i) `(5+sqrt7)(2+sqrt5)=10+5sqrt5+2sqrt7+sqrt(35)` (ii) `(5+sqrt5)(5-sqrt5)=5^2-(sqrt5)^2=25-5=20` (iii) `(sqrt3+sqrt7)^2=(sqrt3)^2+2sqrt3 sqrt7+(sqrt7)^2=3+2sqrt(21)+7=10+2sqrt(21)`. Remark: Note that ‘simplify’ in the example above has been used to mean that the expression should be written as the sum of a rational and an irrational number. We end this section by considering the following problem. Look at `1/sqrt2`. Can you tell where it shows up on the number line? You know that it is irrational. Maybe it is easier to handle if the denominator is a rational number. Let us see if we can ‘rationalise’ the denominator, that is, make the denominator into a rational number. To do so, we need the identities involving square roots. Let us see how. Example: Rationalise the denominator of `1/sqrt2`. Solution: We want to write `1/sqrt2` as an equivalent expression in which the denominator is a rational number. We know that `sqrt2 . sqrt2` is rational. We also know that multiplying `1/sqrt2` by `sqrt2/sqrt2` will give us an equivalent expression, since `sqrt2/sqrt2` = 1. So, we put these two facts together to get `1/sqrt2=1/sqrt2xxsqrt2/sqrt2=sqrt2/2`. In this form, it is easy to locate `1/sqrt2` on the number line. It is halfway between 0 and `sqrt2`! Example: Rationalise the denominator of `1/(2+sqrt3)`. Solution: We use the Identity (iv) given earlier. Multiply and divide `1/(2+sqrt3)` by `2-sqrt3` to get `1/(2+sqrt3)xx(2-sqrt3)/(2-sqrt3)=(2-sqrt3)/(4-3)=2-sqrt3`. Example: Rationalise the denominator of `5/(sqrt3-sqrt5)`. Solution: Here we use the Identity (iii) given earlier. Multiply and divide by `sqrt3+sqrt5` to get `5/(sqrt3-sqrt5)xx(sqrt3+sqrt5)/(sqrt3+sqrt5)=(5(sqrt3+sqrt5))/(3-5)=-(5(sqrt3+sqrt5))/2`. Example: Rationalise the denominator of `1/(7+3sqrt2)`. Solution: Multiply and divide by `7-3sqrt2` to get `1/(7+3sqrt2)xx(7-3sqrt2)/(7-3sqrt2)=(7-3sqrt2)/(49-18)=(7-3sqrt2)/31`. So, when the denominator of an expression contains a term with a square root (or a number under a radical sign), the process of converting it to an equivalent expression whose denominator is a rational number is called rationalising the denominator.
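The rationalised forms worked out above are easy to sanity-check numerically. The short Python snippet below is an illustration added here, not part of the textbook exercise; it simply confirms that each original expression and its rationalised form agree to floating-point accuracy.

```python
from math import isclose, sqrt

# Each pair compares the original expression with its rationalised form.
assert isclose(1 / sqrt(2), sqrt(2) / 2)
assert isclose(1 / (2 + sqrt(3)), 2 - sqrt(3))
assert isclose(5 / (sqrt(3) - sqrt(5)), -5 * (sqrt(3) + sqrt(5)) / 2)
assert isclose(1 / (7 + 3 * sqrt(2)), (7 - 3 * sqrt(2)) / 31)
print("all rationalised forms agree")
```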
In mathematics, the logarithm is the inverse operation to exponentiation, just as division is the inverse of multiplication and vice versa. That means the logarithm of a number is the exponent to which another fixed number, the base, must be raised to produce that number. In the simplest case the logarithm counts repeated multiplication of the same factor; e.g., since 1000 = 10 × 10 × 10 = 10^3, the "logarithm to base 10" of 1000 is 3. More generally, exponentiation allows any positive real number to be raised to any real power, always producing a positive result, so the logarithm can be calculated for any two positive real numbers b and x where b is not equal to 1. The logarithm of x to base b, denoted logb(x) (or logb x when no confusion is possible), is the unique real number y such that b^y = x. For example, log2 64 = 6, as 64 = 2^6. The logarithm to base 10 (that is b = 10) is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number e (≈ 2.718) as its base; its use is widespread in mathematics and physics, because of its simpler derivative. The binary logarithm uses base 2 (that is b = 2) and is commonly used in computer science. Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations. They were rapidly adopted by navigators, scientists, engineers, and others to perform computations more easily, using slide rules and logarithm tables. Tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition because of the fact—important in its own right—that the logarithm of a product is the sum of the logarithms of the factors: logb(xy) = logb(x) + logb(y). Logarithmic scales reduce wide-ranging quantities to tiny scopes. For example, the decibel (dB) is a unit used to express log-ratios, mostly for signal power and amplitude (of which sound pressure is a common example). In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms and of geometric objects called fractals. They help describe frequency ratios of musical intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in psychophysics, and can aid in forensic accounting. In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The discrete logarithm is another variant; it has uses in public-key cryptography.
Motivation and definition
The idea of logarithms is to reverse the operation of exponentiation, that is, raising a number to a power. For example, the third power (or cube) of 2 is 8, because 8 is the product of three factors of 2: 2 × 2 × 2 = 2^3 = 8. It follows that the logarithm of 8 with respect to base 2 is 3, so log2 8 = 3. The third power of some number b is the product of three factors equal to b. More generally, raising b to the n-th power, where n is a natural number, is done by multiplying n factors equal to b. The n-th power of b is written b^n, so that b^n = b × b × ⋯ × b (with n factors). Exponentiation may be extended to b^y, where b is a positive number and the exponent y is any real number. For example, b^(−1) is the reciprocal of b, that is, 1/b. (For further details, including the formula b^(m+n) = b^m · b^n, see the article on exponentiation.)
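To make the inverse relationship concrete, here is a minimal Python illustration (added for this edition, not part of the original article) using the standard math module; it checks the motivating examples above.

```python
import math

# The logarithm answers: "to what power must the base be raised to give x?"
assert math.isclose(math.log10(1000), 3)       # 10**3 == 1000
assert math.isclose(math.log2(64), 6)          # 2**6 == 64

# Exponentiation and the logarithm undo each other.
x = 7.5
assert math.isclose(10 ** math.log10(x), x)
assert math.isclose(math.log(math.e ** 2), 2)  # math.log is the natural logarithm
```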
The logarithm of a positive real number x with respect to base b, a positive real number not equal to 1, is the exponent to which b must be raised to yield x. In other words, the logarithm of x to base b is the solution y to the equation b^y = x. The logarithm is denoted "logb x" (pronounced as "the logarithm of x to base b" or "the base-b logarithm of x" or (most commonly) "the log, base b, of x"), such that the defining identity from above becomes b^(logb x) = x. In the equation y = logb x, the value y is the answer to the question "To what power must b be raised, in order to yield x?". This question can also be stated (with a richer answer) for complex numbers, which is done in the section "Complex logarithm" below, and is more extensively investigated in the article on complex logarithm.
- log2 16 = 4, since 2^4 = 2 × 2 × 2 × 2 = 16.
- Logarithms can also be negative: for example, log2 (1/2) = −1, since 2^(−1) = 1/2.
- log10 150 is approximately 2.176, which lies between 2 and 3, just as 150 lies between 10^2 = 100 and 10^3 = 1000.
- For any base b, logb b = 1 and logb 1 = 0, since b^1 = b and b^0 = 1, respectively.
Several important formulas, sometimes called logarithmic identities or logarithmic laws, relate logarithms to one another.
Product, quotient, power, and root
The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the p-th power of a number is p times the logarithm of the number itself; the logarithm of a p-th root is the logarithm of the number divided by p. The following table lists these identities with examples. Each of the identities can be derived after substituting the logarithm definitions x = b^(logb x) or y = b^(logb y) into the left-hand sides.

| Identity | Formula | Example |
| --- | --- | --- |
| product | logb(xy) = logb x + logb y | log3 243 = log3(9 · 27) = log3 9 + log3 27 = 2 + 3 = 5 |
| quotient | logb(x/y) = logb x − logb y | log2 16 − log2 4 = log2(16/4) = log2 4 = 2 |
| power | logb(x^p) = p logb x | log2 64 = log2(2^6) = 6 log2 2 = 6 |
| root | logb(p-th root of x) = (logb x)/p | log10 √1000 = (log10 1000)/2 = 1.5 |

Change of base
The logarithm logb x can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula: logb x = logk x / logk b. Derivation of the conversion factor between logarithms of arbitrary base: starting from the defining identity x = b^(logb x), we can apply logk to both sides of this equation to get logk x = logb x · logk b. Solving for logb x yields logb x = logk x / logk b, showing the conversion factor from given logk-values to their corresponding logb-values to be 1 / logk b. Given a number x and its logarithm logb x to an unknown base b, the base is given by b = x^(1 / logb x), which can be seen from taking the defining equation x = b^(logb x) to the power of 1 / logb x. Among all choices for the base, three are particularly common. These are b = 10, b = e (the irrational mathematical constant e ≈ 2.71828), and b = 2 (the binary logarithm). In mathematical analysis, the logarithm to base e is widespread because of its particular analytical properties explained below. On the other hand, base-10 logarithms are easy to use for manual calculations in the decimal number system: log10(10x) = 1 + log10 x. Thus, log10 x is related to the number of decimal digits of a positive integer x: the number of digits is the smallest integer strictly bigger than log10 x. For example, log10 1430 is approximately 3.15. The next integer is 4, which is the number of digits of 1430. Both the natural logarithm and the logarithm to base two are used in information theory, corresponding to the use of nats or bits as the fundamental units of information, respectively. Binary logarithms are also used in computer science, where the binary system is ubiquitous, in music theory, where a pitch ratio of two (the octave) is ubiquitous and the cent is the binary logarithm (scaled by 1200) of the ratio between two adjacent equally-tempered pitches, and in photography to measure exposure values.
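Before the notation table that follows, a small Python check (an illustration added here, not from the original article) of the change-of-base rule and the digit-count observation for the three common bases just mentioned:

```python
import math

def log_base(x, b, k=math.e):
    """Change of base: log_b(x) = log_k(x) / log_k(b), for any intermediate base k."""
    return math.log(x, k) / math.log(b, k)

print(log_base(1430, 10))                 # ~3.155, independent of the intermediate base k
print(log_base(1430, 10, k=2))            # same value computed via base-2 logarithms
print(math.floor(math.log10(1430)) + 1)   # 4, the number of decimal digits of 1430
```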
The following table lists common notations for logarithms to these bases and the fields where they are used. Many disciplines write log x instead of logb x, when the intended base can be determined from the context. The notation blog x also occurs. The "ISO notation" column lists designations suggested by the International Organization for Standardization (ISO 31-11). Because the notation log x has been used for all three bases (or when the base is indeterminate or immaterial), the intended base must often be inferred based on context or discipline. In computer science and mathematics, log usually refers to log2 and loge, respectively. In other contexts log often means log10.

| Base b | Name for logb x | ISO notation | Other notations | Used in |
| --- | --- | --- | --- | --- |
| 2 | binary logarithm | lb x | ld x, log x, lg x, log2 x | computer science, information theory, music theory, photography |
| e | natural logarithm | ln x | log x (in mathematics and many programming languages) | mathematics, physics, chemistry, statistics, economics, information theory, and engineering |
| 10 | common logarithm | lg x | log x, log10 x (in engineering, biology, astronomy) | various engineering fields (see decibel and below), logarithm tables, handheld calculators, spectroscopy |

The history of the logarithm in seventeenth-century Europe is the discovery of a new function that extended the realm of analysis beyond the scope of algebraic methods. The method of logarithms was publicly propounded by John Napier in 1614, in a book titled Mirifici Logarithmorum Canonis Descriptio (Description of the Wonderful Rule of Logarithms). Prior to Napier's invention, there had been other techniques of similar scope, such as the prosthaphaeresis or the use of tables of progressions, extensively developed by Jost Bürgi around 1600. The common logarithm of a number is the index of that power of ten which equals the number. Speaking of a number as requiring so many figures is a rough allusion to the common logarithm, and was referred to by Archimedes as the "order of a number". The first real logarithms were heuristic methods to turn multiplication into addition, thus facilitating rapid computation. Some of these methods used tables derived from trigonometric identities. Such methods are called prosthaphaeresis. Invention of the function now known as the natural logarithm began as an attempt to perform a quadrature of a rectangular hyperbola by Grégoire de Saint-Vincent, a Belgian Jesuit residing in Prague. Archimedes had written The Quadrature of the Parabola in the third century BC, but a quadrature for the hyperbola eluded all efforts until Saint-Vincent published his results in 1647. The relation that the logarithm provides between a geometric progression in its argument and an arithmetic progression of values prompted A. A. de Sarasa to make the connection between Saint-Vincent's quadrature and the tradition of logarithms in prosthaphaeresis, leading to the term "hyperbolic logarithm", a synonym for the natural logarithm. Soon the new function was appreciated by Christiaan Huygens and James Gregory. The notation Log y was adopted by Leibniz in 1675, and the next year he connected it to the integral ∫ dy/y.
Logarithm tables, slide rules, and historical applications
By simplifying difficult calculations, logarithms contributed to the advance of science, especially astronomy. They were critical to advances in surveying, celestial navigation, and other domains.
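To make the historical "multiplication becomes addition" idea concrete in modern terms, here is a small Python sketch (an added illustration, not a historical reconstruction); the characteristic/mantissa split discussed just below is also shown.

```python
import math

# Multiplying two numbers by adding their common logarithms, as a log-table user would:
c, d = 3542, 2.5
product = 10 ** (math.log10(c) + math.log10(d))   # antilogarithm of the sum of the logs
print(product, c * d)                              # both ~8855, up to rounding

# A table entry is usually split into an integer "characteristic" and a fractional "mantissa":
characteristic, mantissa = divmod(math.log10(3542), 1)
print(int(characteristic), mantissa)               # 3 and ~0.5493 (the common log of 3.542)
```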
Pierre-Simon Laplace called logarithms "[a]n admirable artifice which, by reducing to a few days the labour of many months, doubles the life of the astronomer, and spares him the errors and disgust inseparable from long calculations." A key tool that enabled the practical use of logarithms before calculators and computers was the table of logarithms. The first such table was compiled by Henry Briggs in 1617, immediately after Napier's invention. Subsequently, tables with increasing scope were written. These tables listed the values of logb x and b^x for any number x in a certain range, at a certain precision, for a certain base b (usually b = 10). For example, Briggs' first table contained the common logarithms of all integers in the range 1–1000, with a precision of 14 digits. As the function f(x) = b^x is the inverse function of logb x, it has been called the antilogarithm. The product and quotient of two positive numbers c and d were routinely calculated as the sum and difference of their logarithms. The product cd or quotient c/d came from looking up the antilogarithm of the sum or difference, also via the same table: cd = b^(logb c + logb d) and c/d = b^(logb c − logb d). For manual calculations that demand any appreciable precision, performing the lookups of the two logarithms, calculating their sum or difference, and looking up the antilogarithm is much faster than performing the multiplication by earlier methods such as prosthaphaeresis, which relies on trigonometric identities. Calculations of powers and roots are reduced to multiplications or divisions and look-ups by c^d = b^(d · logb c) and the d-th root of c = b^((logb c)/d). Many logarithm tables give logarithms by separately providing the characteristic and mantissa of x, that is to say, the integer part and the fractional part of log10 x. The characteristic of 10 · x is one plus the characteristic of x, and their mantissas are the same. This extends the scope of logarithm tables: given a table listing log10 x for all integers x ranging from 1 to 1000, the logarithm of 3542 is approximated by log10 3542 = 1 + log10 354.2 ≈ 1 + log10 354. Greater accuracy can be obtained by interpolation. Another critical application was the slide rule, a pair of logarithmically divided scales used for calculation, as illustrated here. The non-sliding logarithmic scale, Gunter's rule, was invented shortly after Napier's invention. William Oughtred enhanced it to create the slide rule—a pair of logarithmic scales movable with respect to each other. Numbers are placed on sliding scales at distances proportional to the differences between their logarithms. Sliding the upper scale appropriately amounts to mechanically adding logarithms. For example, adding the distance from 1 to 2 on the lower scale to the distance from 1 to 3 on the upper scale yields a product of 6, which is read off at the lower part. The slide rule was an essential calculating tool for engineers and scientists until the 1970s, because it allows, at the expense of precision, much faster computation than techniques based on tables. A deeper study of logarithms requires the concept of a function. A function is a rule that, given one number, produces another number. An example is the function producing the x-th power of b from any real number x, where the base b is a fixed number. This function is written f(x) = b^x. To justify the definition of logarithms, it is necessary to show that the equation b^x = y has a solution x and that this solution is unique, provided that y is positive and that b is positive and unequal to 1. A proof of that fact requires the intermediate value theorem from elementary calculus.
This theorem states that a continuous function that produces two values m and n also produces any value that lies between m and n. A function is continuous if it does not "jump", that is, if its graph can be drawn without lifting the pen. This property can be shown to hold for the function f(x) = b^x. Because f takes arbitrarily large and arbitrarily small positive values, any number y > 0 lies between f(x0) and f(x1) for suitable x0 and x1. Hence, the intermediate value theorem ensures that the equation f(x) = y has a solution. Moreover, there is only one solution to this equation, because the function f is strictly increasing (for b > 1), or strictly decreasing (for 0 < b < 1). The unique solution x is the logarithm of y to base b, logb y. The function that assigns to y its logarithm is called the logarithm function or logarithmic function (or just logarithm). The function logb x is essentially characterized by the product formula logb(xy) = logb x + logb y. The formula for the logarithm of a power says in particular that for any number x, logb(b^x) = x. In prose, taking the x-th power of b and then the base-b logarithm gives back x. Conversely, given a positive number y, the formula b^(logb y) = y says that first taking the logarithm and then exponentiating gives back y. Thus, the two possible ways of combining (or composing) logarithms and exponentiation give back the original number. Therefore, the logarithm to base b is the inverse function of f(x) = b^x. Inverse functions are closely related to the original functions. Their graphs correspond to each other upon exchanging the x- and the y-coordinates (or upon reflection at the diagonal line x = y), as shown at the right: a point (t, u = b^t) on the graph of f yields a point (u, t = logb u) on the graph of the logarithm and vice versa. As a consequence, logb(x) diverges to infinity (gets bigger than any given number) if x grows to infinity, provided that b is greater than one. In that case, logb(x) is an increasing function. For b < 1, logb(x) tends to minus infinity instead. When x approaches zero, logb x goes to minus infinity for b > 1 (plus infinity for b < 1, respectively).
Derivative and antiderivative
Analytic properties of functions pass to their inverses. Thus, as f(x) = b^x is a continuous and differentiable function, so is logb y. Roughly, a continuous function is differentiable if its graph has no sharp "corners". Moreover, as the derivative of f(x) evaluates to ln(b) b^x by the properties of the exponential function, the chain rule implies that the derivative of logb x is given by d/dx logb x = 1/(x ln b). The derivative of ln x is 1/x; this implies that ln x is the unique antiderivative of 1/x that has the value 0 for x = 1. It is this very simple formula that motivated the qualification "natural" for the natural logarithm; this is also one of the main reasons for the importance of the constant e. The derivative with a generalised functional argument f(x) is d/dx ln(f(x)) = f′(x)/f(x). The quotient at the right hand side is called the logarithmic derivative of f. Computing f′(x) by means of the derivative of ln(f(x)) is known as logarithmic differentiation. The antiderivative of the natural logarithm ln(x) is ∫ ln(x) dx = x ln(x) − x + C.
Integral representation of the natural logarithm
The natural logarithm of t can be defined as the definite integral ln(t) = ∫₁^t dx/x. In other words, ln(t) equals the area between the x axis and the graph of the function 1/x, ranging from x = 1 to x = t (figure at the right). This is a consequence of the fundamental theorem of calculus and the fact that the derivative of ln(x) is 1/x. The right hand side of this equation can serve as a definition of the natural logarithm.
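The integral characterisation just given can be checked numerically. The sketch below is an added illustration, under the assumption that a crude midpoint rule is precise enough for a demonstration; it approximates the area under 1/x and compares it with math.log.

```python
import math

def ln_via_area(t, n=100_000):
    """Midpoint-rule approximation of the area under 1/x between x = 1 and x = t."""
    h = (t - 1) / n
    return sum(h / (1 + (i + 0.5) * h) for i in range(n))

print(ln_via_area(2.0), math.log(2.0))    # both ~0.693147
print(ln_via_area(10.0), math.log(10.0))  # both ~2.302585
```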
Product and power logarithm formulas can be derived from this definition. For example, the product formula ln(tu) = ln(t) + ln(u) is deduced as: ln(tu) = ∫₁^(tu) dx/x = (1) ∫₁^t dx/x + ∫_t^(tu) dx/x = (2) ln(t) + ∫₁^u dw/w = ln(t) + ln(u). The equality (1) splits the integral into two parts, while the equality (2) is a change of variable (w = x/t). In the illustration below, the splitting corresponds to dividing the area into the yellow and blue parts. Rescaling the left hand blue area vertically by the factor t and shrinking it by the same factor horizontally does not change its size. Moving it appropriately, the area fits the graph of the function f(x) = 1/x again. Therefore, the left hand blue area, which is the integral of f(x) from t to tu, is the same as the integral from 1 to u. This justifies the equality (2) with a more geometric proof. The power formula ln(t^r) = r ln(t) may be derived in a similar way: ln(t^r) = ∫₁^(t^r) dx/x = ∫₁^t (r w^(r−1)/w^r) dw = r ∫₁^t dw/w = r ln(t). The second equality uses a change of variables (integration by substitution), w = x^(1/r). The sum over the reciprocals of natural numbers, 1 + 1/2 + 1/3 + ⋯ + 1/n, is called the harmonic series. It is closely tied to the natural logarithm: as n tends to infinity, the difference between the harmonic series and ln(n) converges to the Euler–Mascheroni constant γ ≈ 0.5772. There are also some other integral representations of the logarithm that are useful in some situations. The first identity can be verified by showing that it has the same value at x = 1, and the same derivative. The second identity can be proven by inserting the Laplace transform of cos(xt) (and cos(t)).
Transcendence of the logarithm
Real numbers that are not algebraic are called transcendental; for example, π and e are such numbers, but √2 is not. Almost all real numbers are transcendental. The logarithm is an example of a transcendental function. The Gelfond–Schneider theorem asserts that logarithms usually take transcendental, i.e., "difficult" values. Logarithms are easy to compute in some cases, such as log10(1000) = 3. In general, logarithms can be calculated using power series or the arithmetic–geometric mean, or be retrieved from a precalculated logarithm table that provides a fixed precision. Newton's method, an iterative method to solve equations approximately, can also be used to calculate the logarithm, because its inverse function, the exponential function, can be computed efficiently. Using look-up tables, CORDIC-like methods can be used to compute logarithms if the only available operations are addition and bit shifts. Moreover, the binary logarithm algorithm calculates lb(x) recursively based on repeated squarings of x, taking advantage of the relation log2(x^2) = 2 log2(x).
Taylor series
For any real number z with 0 < z ≤ 2, ln(z) = (z − 1) − (z − 1)^2/2 + (z − 1)^3/3 − (z − 1)^4/4 + ⋯. This is a shorthand for saying that ln(z) can be approximated to a more and more accurate value by the following expressions: (z − 1), then (z − 1) − (z − 1)^2/2, then (z − 1) − (z − 1)^2/2 + (z − 1)^3/3, and so on. For example, with z = 1.5 the third approximation yields 0.4167, which is about 0.011 greater than ln(1.5) = 0.405465. This series approximates ln(z) with arbitrary precision, provided the number of summands is large enough. In elementary calculus, ln(z) is therefore the limit of this series. It is the Taylor series of the natural logarithm at z = 1. The Taylor series of ln(z) provides a particularly useful approximation to ln(1 + z) when z is small, |z| < 1, since then ln(1 + z) = z − z^2/2 + z^3/3 − ⋯ ≈ z. For example, with z = 0.1 the first-order approximation gives ln(1.1) ≈ 0.1, which is less than 5% off the correct value 0.0953.
More efficient series
Another series is based on the area hyperbolic tangent function: ln(z) = 2 artanh((z − 1)/(z + 1)) = 2 [ (z − 1)/(z + 1) + (1/3)((z − 1)/(z + 1))^3 + (1/5)((z − 1)/(z + 1))^5 + ⋯ ], for any real number z > 0. This series can be derived from the above Taylor series. It converges more quickly than the Taylor series, especially if z is close to 1. For example, for z = 1.5, the first three terms of the second series approximate ln(1.5) with an error of about 3.8 × 10^(−6).
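The two series can be compared directly. The following Python sketch is an illustration added here (the term counts are arbitrary choices); it reproduces the z = 1.5 figures quoted above.

```python
import math

def ln_taylor(z, terms):
    """Taylor series at z = 1: sum of (-1)**(k+1) * (z - 1)**k / k."""
    return sum((-1) ** (k + 1) * (z - 1) ** k / k for k in range(1, terms + 1))

def ln_atanh(z, terms):
    """Area-hyperbolic-tangent series: 2 * sum of u**(2k+1) / (2k+1), with u = (z-1)/(z+1)."""
    u = (z - 1) / (z + 1)
    return 2 * sum(u ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

print(ln_taylor(1.5, 3))   # 0.41666..., about 0.011 above ln(1.5)
print(ln_atanh(1.5, 3))    # 0.405461..., within about 4e-6 of ln(1.5)
print(math.log(1.5))       # 0.405465...
```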
The quick convergence for z close to 1 can be taken advantage of in the following way: given a low-accuracy approximation y ≈ ln(z) and putting A = z / exp(y), the logarithm of z is ln(z) = y + ln(A). The better the initial approximation y is, the closer A is to 1, so its logarithm can be calculated efficiently. A can be calculated using the exponential series, which converges quickly provided y is not too large. Calculating the logarithm of larger z can be reduced to smaller values of z by writing z = a · 10^b, so that ln(z) = ln(a) + b · ln(10). A closely related method can be used to compute the logarithm of integers. From the above series, it follows that ln(n + 1) = ln(n) + 2 [ 1/(2n + 1) + (1/3)(1/(2n + 1))^3 + (1/5)(1/(2n + 1))^5 + ⋯ ]. If the logarithm of a large integer n is known, then this series yields a fast converging series for log(n + 1).
Arithmetic–geometric mean approximation
The arithmetic–geometric mean yields high precision approximations of the natural logarithm. Sasaki and Kanada showed in 1982 that it was particularly fast for precisions between 400 and 1000 decimal places, while Taylor series methods were typically faster when less precision was needed. In their work ln(x) is approximated to a precision of 2^(−p) (or p precise bits) by the following formula (due to Carl Friedrich Gauss): ln(x) ≈ π / (2 M(1, 2^(2−m)/x)) − m ln(2). Here M(x, y) denotes the arithmetic–geometric mean of x and y. It is obtained by repeatedly calculating the average (x + y)/2 (arithmetic mean) and √(xy) (geometric mean) of x and y and then letting those two numbers become the next x and y. The two numbers quickly converge to a common limit which is the value of M(x, y). m is chosen such that x · 2^m > 2^(p/2), to ensure the required precision. A larger m makes the M(x, y) calculation take more steps (the initial x and y are farther apart, so it takes more steps to converge) but gives more precision. The constants π and ln(2) can be calculated with quickly converging series. While at Los Alamos National Laboratory working on the Manhattan Project, Richard Feynman developed a bit-processing algorithm that is similar to long division and was later used in the Connection Machine. The algorithm uses the fact that every real number x with 1 < x < 2 is uniquely representable as a product of distinct factors of the form 1 + 2^(−k). The algorithm sequentially builds that product P: if P · (1 + 2^(−k)) < x, then it changes P to P · (1 + 2^(−k)). It then increases k by one regardless. The algorithm stops when k is large enough to give the desired accuracy. Because log(x) is the sum of the terms of the form log(1 + 2^(−k)) corresponding to those k for which the factor 1 + 2^(−k) was included in the product P, log(x) may be computed by simple addition, using a table of log(1 + 2^(−k)) for all k. Any base may be used for the logarithm table. Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of scale invariance. For example, each chamber of the shell of a nautilus is an approximate copy of the next one, scaled by a constant factor. This gives rise to a logarithmic spiral. Benford's law on the distribution of leading digits can also be explained by scale invariance. Logarithms are also linked to self-similarity. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions. The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture, are also based on logarithms. Logarithmic scales are useful for quantifying the relative change of a value as opposed to its absolute difference.
Moreover, because the logarithmic function log(x) grows very slowly for large x, logarithmic scales are used to compress large-scale scientific data. Logarithms also occur in numerous scientific formulas, such as the Tsiolkovsky rocket equation, the Fenske equation, or the Nernst equation. Scientific quantities are often expressed as logarithms of other quantities, using a logarithmic scale. For example, the decibel is a unit of measurement associated with logarithmic-scale quantities. It is based on the common logarithm of ratios—10 times the common logarithm of a power ratio or 20 times the common logarithm of a voltage ratio. It is used to quantify the loss of voltage levels in transmitting electrical signals, to describe power levels of sounds in acoustics, and the absorbance of light in the fields of spectrometry and optics. The signal-to-noise ratio describing the amount of unwanted noise in relation to a (meaningful) signal is also measured in decibels. In a similar vein, the peak signal-to-noise ratio is commonly used to assess the quality of sound and image compression methods using the logarithm. The strength of an earthquake is measured by taking the common logarithm of the energy emitted at the quake. This is used in the moment magnitude scale or the Richter magnitude scale. For example, a 5.0 earthquake releases 32 times (10^1.5) and a 6.0 releases 1000 times (10^3) the energy of a 4.0. Another logarithmic scale is apparent magnitude. It measures the brightness of stars logarithmically. Yet another example is pH in chemistry; pH is the negative of the common logarithm of the activity of hydronium ions (the form hydrogen ions H+ take in water). The activity of hydronium ions in neutral water is 10^(−7) mol·L^(−1), hence a pH of 7. Vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of 10^4 of the activity, that is, vinegar's hydronium ion activity is about 10^(−3) mol·L^(−1). Semilog (log-linear) graphs use the logarithmic scale concept for visualization: one axis, typically the vertical one, is scaled logarithmically. For example, the chart at the right compresses the steep increase from 1 million to 1 trillion to the same space (on the vertical axis) as the increase from 1 to 1 million. In such graphs, exponential functions of the form f(x) = a · b^x appear as straight lines with slope equal to the logarithm of b. Log-log graphs scale both axes logarithmically, which causes functions of the form f(x) = a · x^k to be depicted as straight lines with slope equal to the exponent k. This is applied in visualizing and analyzing power laws. Logarithms occur in several laws describing human perception: Hick's law proposes a logarithmic relation between the time individuals take to choose an alternative and the number of choices they have. Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic function of the distance to and the size of the target. In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation such as the actual vs. the perceived weight of an item a person is carrying. (This "law", however, is less precise than more recent models, such as the Stevens' power law.) Psychological studies found that individuals with little mathematics education tend to estimate quantities logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10 is positioned as close to 100 as 100 is to 1000.
Increasing education shifts this to a linear estimate (positioning 1000 ten times as far away as 100) in some circumstances, while logarithms are used when the numbers to be plotted are difficult to plot linearly.
Probability theory and statistics
Logarithms arise in probability theory: the law of large numbers dictates that, for a fair coin, as the number of coin-tosses increases to infinity, the observed proportion of heads approaches one-half. The fluctuations of this proportion about one-half are described by the law of the iterated logarithm. Logarithms also occur in log-normal distributions. When the logarithm of a random variable has a normal distribution, the variable is said to have a log-normal distribution. Log-normal distributions are encountered in many fields, wherever a variable is formed as the product of many independent positive random variables, for example in the study of turbulence. Logarithms are used for maximum-likelihood estimation of parametric statistical models. For such a model, the likelihood function depends on at least one parameter that must be estimated. A maximum of the likelihood function occurs at the same parameter-value as a maximum of the logarithm of the likelihood (the "log likelihood"), because the logarithm is an increasing function. The log-likelihood is easier to maximize, especially for the multiplied likelihoods for independent random variables. Benford's law describes the occurrence of digits in many data sets, such as heights of buildings. According to Benford's law, the probability that the first decimal-digit of an item in the data sample is d (from 1 to 9) equals log10(d + 1) − log10(d), regardless of the unit of measurement. Thus, about 30% of the data can be expected to have 1 as first digit, 18% start with 2, etc. Auditors examine deviations from Benford's law to detect fraudulent accounting. Analysis of algorithms is a branch of computer science that studies the performance of algorithms (computer programs solving a certain problem). Logarithms are valuable for describing algorithms that divide a problem into smaller ones, and join the solutions of the subproblems. For example, to find a number in a sorted list, the binary search algorithm checks the middle entry and proceeds with the half before or after the middle entry if the number is still not found. This algorithm requires, on average, log2(N) comparisons, where N is the list's length. Similarly, the merge sort algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Merge sort algorithms typically require a time approximately proportional to N · log(N). The base of the logarithm is not specified here, because the result only changes by a constant factor when another base is used. A constant factor is usually disregarded in the analysis of algorithms under the standard uniform cost model. A function f(x) is said to grow logarithmically if f(x) is (exactly or approximately) proportional to the logarithm of x. (Biological descriptions of organism growth, however, use this term for an exponential function.) For example, any natural number N can be represented in binary form in no more than log2(N) + 1 bits. In other words, the amount of memory needed to store N grows logarithmically with N.
Entropy and chaos
Entropy is broadly a measure of the disorder of a system. In statistical thermodynamics, the entropy S of a physical system is defined as S = −k Σ pi ln(pi). The sum is over all possible states i of the system in question, such as the positions of gas particles in a container. Moreover, pi is the probability that the state i is attained and k is the Boltzmann constant.
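As a toy illustration of the entropy sum just given (added here; the two-state probabilities are made up for the example), the same expression can be evaluated directly, with k set to 1 and the natural logarithm as in the formula:

```python
import math

def entropy(probabilities, k=1.0):
    """S = -k * sum(p_i * ln(p_i)), taken over states with nonzero probability."""
    return -k * sum(p * math.log(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))   # ln(2) ~ 0.693: two equally likely states
print(entropy([0.9, 0.1]))   # ~0.325: a more ordered (less uncertain) system
print(entropy([1.0]))        # 0.0: a single certain state has no entropy
```

With log2 in place of the natural logarithm, the same sum gives the information-theoretic entropy in bits discussed next.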
Similarly, entropy in information theory measures the quantity of information. If a message recipient may expect any one of N possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as log2(N) bits.

Lyapunov exponents use logarithms to gauge the degree of chaoticity of a dynamical system. For example, for a particle moving on an oval billiard table, even small changes of the initial conditions result in very different paths of the particle. Such systems are chaotic in a deterministic way, because small measurement errors of the initial state predictably lead to largely different final states. At least one Lyapunov exponent of a deterministically chaotic system is positive.

Logarithms occur in definitions of the dimension of fractals. Fractals are geometric objects that are self-similar: small parts reproduce, at least roughly, the entire global structure. The Sierpinski triangle can be covered by three copies of itself, each having sides half the original length. This makes the Hausdorff dimension of this structure ln(3)/ln(2) ≈ 1.58. Another logarithm-based notion of dimension is obtained by counting the number of boxes needed to cover the fractal in question.

Logarithms are related to musical tones and intervals. In equal temperament, the frequency ratio depends only on the interval between two tones, not on the specific frequency, or pitch, of the individual tones. For example, the note A has a frequency of 440 Hz and B-flat has a frequency of 466 Hz. The interval between A and B-flat is a semitone, as is the one between B-flat and B (frequency 493 Hz). Accordingly, the frequency ratios agree: 466/440 ≈ 493/466 ≈ 1.059 ≈ 2^(1/12). Therefore, logarithms can be used to describe the intervals: an interval is measured in semitones by taking the base-2^(1/12) logarithm of the frequency ratio, while the base-2^(1/1200) logarithm of the frequency ratio expresses the interval in cents, hundredths of a semitone. The latter is used for finer encoding, as it is needed for non-equal temperaments. (A table at this point listed sample intervals — 1/12 tone, semitone, just major third, major third, tritone, and octave, with the two tones played at the same time — together with their frequency ratios r and the corresponding numbers of semitones and cents.)

Natural logarithms are closely linked to counting prime numbers (2, 3, 5, 7, 11, ...), an important topic in number theory. For any integer x, the quantity of prime numbers less than or equal to x is denoted π(x). The prime number theorem asserts that π(x) is approximately given by x / ln(x), in the sense that the ratio of π(x) and that fraction approaches 1 when x tends to infinity. As a consequence, the probability that a randomly chosen number between 1 and x is prime is inversely proportional to the number of decimal digits of x. A far better estimate of π(x) is given by the offset logarithmic integral function Li(x), defined by Li(x) = ∫_2^x dt / ln(t). The Riemann hypothesis, one of the oldest open mathematical conjectures, can be stated in terms of comparing π(x) and Li(x). The Erdős–Kac theorem describing the number of distinct prime factors also involves the natural logarithm. The logarithm of n factorial, n! = 1 · 2 · ... · n, is given by ln(n!) = ln(1) + ln(2) + ... + ln(n).

The complex numbers a solving the equation e^a = z are called complex logarithms. Here, z is a complex number. A complex number is commonly represented as z = x + iy, where x and y are real numbers and i is the imaginary unit. Such a number can be visualized as a point in the complex plane.
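As a quick check of the interval formulas just stated, here is a small Python sketch (illustrative only; the 440, 466, and 493 Hz figures are the approximate pitches named in the text, and the helper names are mine):

```python
import math

def semitones(freq_ratio):
    """Interval size in semitones: the base-2^(1/12) logarithm of the frequency ratio."""
    return math.log(freq_ratio, 2 ** (1 / 12))

def cents(freq_ratio):
    """Interval size in cents: the base-2^(1/1200) logarithm of the frequency ratio."""
    return math.log(freq_ratio, 2 ** (1 / 1200))

print(round(semitones(466 / 440), 2))  # ~1 semitone between A and B-flat
print(round(cents(493 / 466), 1))      # ~97.5 cents between B-flat and B
print(round(semitones(2), 1))          # an octave is exactly 12 semitones
```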
The polar form encodes a non-zero complex number z by its absolute value, that is, the distance r to the origin, and an angle between the x axis and the line passing through the origin and z. This angle is called the argument of z. The absolute value r of z is r = √(x² + y²). The argument is not uniquely specified by z: both φ and φ' = φ + 2π are arguments of z because adding 2π radians or 360 degrees[nb 6] to φ corresponds to "winding" around the origin counter-clockwise by a turn; the resulting complex number is again z. However, exactly one argument φ satisfies −π < φ ≤ π. It is called the principal argument, denoted Arg(z), with a capital A. (An alternative normalization is 0 ≤ Arg(z) < 2π.)

Using the polar form z = r·e^(iφ), the complex logarithms of z — all those numbers a for which the a-th power of e equals z — are the values a = ln(r) + i(φ + 2nπ), where φ is the principal argument Arg(z) and n is an arbitrary integer. Any such a is called a complex logarithm of z. There are infinitely many of them, in contrast to the uniquely defined real logarithm. If n = 0, a is called the principal value of the logarithm, denoted Log(z). The principal argument of any positive real number x is 0; hence Log(x) is a real number and equals the real (natural) logarithm. However, the above formulas for logarithms of products and powers do not generalize to the principal value of the complex logarithm. A plot of Log(z) over the complex plane shows a discontinuity: the jump at the negative part of the x- or real axis is caused by the jump of the principal argument there. This locus is called a branch cut. This behavior can only be circumvented by dropping the range restriction on φ. Then the argument of z and, consequently, its logarithm become multi-valued functions.

Inverses of other exponential functions

Exponentiation occurs in many areas of mathematics and its inverse function is often referred to as the logarithm. For example, the logarithm of a matrix is the (multi-valued) inverse function of the matrix exponential. Another example is the p-adic logarithm, the inverse function of the p-adic exponential. Both are defined via Taylor series analogous to the real case. In the context of differential geometry, the exponential map maps the tangent space at a point of a manifold to a neighborhood of that point. Its inverse is also called the logarithmic (or log) map.

In the context of finite groups, exponentiation is given by repeatedly multiplying one group element b with itself; the discrete logarithm is the integer n solving the equation b^n = x, where x is an element of the group. Carrying out the exponentiation can be done efficiently, but the discrete logarithm is believed to be very hard to calculate in some groups. This asymmetry has important applications in public key cryptography, such as for example in the Diffie–Hellman key exchange, a routine that allows secure exchanges of cryptographic keys over unsecured information channels. Zech's logarithm is related to the discrete logarithm in the multiplicative group of non-zero elements of a finite field.

Further logarithm-like inverse functions include the double logarithm ln(ln(x)), the super- or hyper-4-logarithm (a slight variation of which is called iterated logarithm in computer science), the Lambert W function, and the logit. They are the inverse functions of the double exponential function, tetration, of f(w) = w·e^w, and of the logistic function, respectively.

From the perspective of group theory, the identity log(cd) = log(c) + log(d) expresses a group isomorphism between positive reals under multiplication and reals under addition. Logarithmic functions are the only continuous isomorphisms between these groups.
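A short, self-contained Python sketch (using the standard cmath module; the sample value z = −1 + i is arbitrary) illustrates the principal value Log(z) and the other complex logarithms, which differ from it by integer multiples of 2πi:

```python
import cmath
import math

z = -1 + 1j                      # an arbitrary non-zero complex number
principal = cmath.log(z)         # Log(z) = ln|z| + i*Arg(z), with -pi < Arg(z) <= pi
print(principal)

# The other complex logarithms differ from Log(z) by integer multiples of 2*pi*i.
for n in (-1, 1, 2):
    a = principal + 2 * math.pi * n * 1j
    print(n, a, cmath.exp(a))    # exp(a) recovers z (up to rounding) for every n
```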
By means of that isomorphism, the Haar measure (Lebesgue measure) dx on the reals corresponds to the Haar measure dx/x on the positive reals. The polylogarithm is the function defined by - The restrictions on x and b are explained in the section "Analytic properties". - Some mathematicians disapprove of this notation. In his 1985 autobiography, Paul Halmos criticized what he considered the "childish ln notation," which he said no mathematician had ever used. The notation was invented by Irving Stringham, a mathematician. - For example C, Java, Haskell, and BASIC. - The same series holds for the principal value of the complex logarithm for complex numbers z satisfying |z − 1| < 1. - The same series holds for the principal value of the complex logarithm for complex numbers z with positive real part. - See radian for the conversion between 2π and 360 degrees. - Shirali, Shailesh (2002), A Primer on Logarithms, Hyderabad: Universities Press, ISBN 978-81-7371-414-6, esp. section 2 - Kate, S.K.; Bhapkar, H.R. (2009), Basics Of Mathematics, Pune: Technical Publications, ISBN 978-81-8431-755-8, chapter 1 - All statements in this section can be found in Shailesh Shirali 2002, section 4, (Douglas Downing 2003, p. 275), or Kate & Bhapkar 2009, p. 1-1, for example. - Bernstein, Stephen; Bernstein, Ruth (1999), Schaum's outline of theory and problems of elements of statistics. I, Descriptive statistics and probability, Schaum's outline series, New York: McGraw-Hill, ISBN 978-0-07-005023-5, p. 21 - Downing, Douglas (2003), Algebra the Easy Way, Barron's Educational Series, Hauppauge, N.Y.: Barron's, ISBN 978-0-7641-1972-9, chapter 17, p. 275 - Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag, ISBN 978-3-540-21045-0, p. 20 - Van der Lubbe, Jan C. A. (1997), Information Theory, Cambridge University Press, p. 3, ISBN 9780521467605 - Allen, Elizabeth; Triantaphillidou, Sophie (2011), The Manual of Photography, Taylor & Francis, p. 228, ISBN 9780240520377 - Franz Embacher; Petra Oberhuemer, Mathematisches Lexikon (in German), mathe online: für Schule, Fachhochschule, Universität unde Selbststudium, retrieved 2011-03-22 - Taylor, B. N. (1995), Guide for the Use of the International System of Units (SI), US Department of Commerce - Goodrich, Michael T.; Tamassia, Roberto (2002), Algorithm Design: Foundations, Analysis, and Internet Examples, John Wiley & Sons, p. 23, One of the interesting and sometimes even surprising aspects of the analysis of data structures and algorithms is the ubiquitous presence of logarithms ... As is the custom in the computing literature, we omit writing the base b of the logarithm when b = 2. - Parkhurst, David F. (2007). Introduction to Applied Mathematics for Environmental Science (illustrated ed.). Springer Science & Business Media. p. 288. ISBN 978-0-387-34228-3. Extract of page 288 - Gullberg, Jan (1997), Mathematics: from the birth of numbers., New York: W. W. Norton & Co, ISBN 978-0-393-04002-9 - See footnote 1 in Perl, Yehoshua; Reingold, Edward M. (December 1977). "Understanding the complexity of interpolation search". Information Processing Letters. 6 (6): 219–222. doi:10.1016/0020-0190(77)90072-2. - Paul Halmos (1985), I Want to Be a Mathematician: An Automathography, Berlin, New York: Springer-Verlag, ISBN 978-0-387-96078-4 - Irving Stringham (1893), Uniplanar algebra: being part I of a propædeutic to the higher mathematical analysis, The Berkeley Press, p. xiii - Roy S. 
Freedman (2006), Introduction to Financial Technology, Amsterdam: Academic Press, p. 59, ISBN 978-0-12-370478-8 - See Theorem 3.29 in Rudin, Walter (1984). Principles of mathematical analysis (3rd ed., International student ed.). Auckland: McGraw-Hill International. ISBN 978-0070856134. - Napier, John (1614), Mirifici Logarithmorum Canonis Descriptio [The Description of the Wonderful Rule of Logarithms] (in Latin), Edinburgh, Scotland: Andrew Hart - Hobson, Ernest William (1914), John Napier and the invention of logarithms, 1614, Cambridge: The University Press - Folkerts, Menso; Launert, Dieter; Thom, Andreas (October 2015), Jost Bürgi's Method for Calculating Sines, arXiv: - MacTutor Article on Jost Bürgi: http://www-history.mcs.st-and.ac.uk/Biographies/Burgi.html - William Gardner (1742) Tables of Logarithms - R.C. Pierce (1977) "A brief history of logarithm", Two-Year College Mathematics Journal 8(1):22–6. - Enrique Gonzales-Velasco (2011) Journey through Mathematics – Creative Episodes in its History, §2.4 Hyperbolic logarithms, page 117, Springer ISBN 978-0-387-92153-2 - Florian Cajori (1913) "History of the exponential and logarithm concepts", American Mathematical Monthly 20: 5, 35, 75, 107, 148, 173, 205. - Bryant, Walter W., A History of Astronomy, London: Methuen & Co, p. 44 - Campbell-Kelly, Martin (2003), The history of mathematical tables: from Sumer to spreadsheets, Oxford scholarship online, Oxford University Press, ISBN 978-0-19-850841-0, section 2 - Abramowitz, Milton; Stegun, Irene A., eds. (1972), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (10th ed.), New York: Dover Publications, ISBN 978-0-486-61272-0, section 4.7., p. 89 - Spiegel, Murray R.; Moyer, R.E. (2006), Schaum's outline of college algebra, Schaum's outline series, New York: McGraw-Hill, ISBN 978-0-07-145227-4, p. 264 - Maor 2009, sections 1, 13 - Devlin, Keith (2004). Sets, functions, and logic: an introduction to abstract mathematics. Chapman & Hall/CRC mathematics (3rd ed.). Boca Raton, Fla: Chapman & Hall/CRC. ISBN 1-58488-449-5., or see the references in function - Lang, Serge (1997), Undergraduate analysis, Undergraduate Texts in Mathematics (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-94841-6, MR 1476913, section III.3 - Lang 1997, section IV.2 - Dieudonné, Jean (1969). Foundations of Modern Analysis. 1. Academic Press. p. 84. item (4.3.1) - Stewart, James (2007), Single Variable Calculus: Early Transcendentals, Belmont: Thomson Brooks/Cole, ISBN 978-0-495-01169-9, section 1.6 - "Calculation of d/dx(Log(b,x))". Wolfram Alpha. Wolfram Research. Retrieved 15 March 2011. - Kline, Morris (1998), Calculus: an intuitive and physical approach, Dover books on mathematics, New York: Dover Publications, ISBN 978-0-486-40453-0, p. 386 - "Calculation of Integrate(ln(x))". Wolfram Alpha. Wolfram Research. Retrieved 15 March 2011. - Abramowitz & Stegun, eds. 1972, p. 69 - Courant, Richard (1988), Differential and integral calculus. Vol. I, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-60842-4, MR 1009558, section III.6 - Havil, Julian (2003), Gamma: Exploring Euler's Constant, Princeton University Press, ISBN 978-0-691-09983-5, sections 11.5 and 13.8 - Nomizu, Katsumi (1996), Selected papers on number theory and algebraic geometry, 172, Providence, RI: AMS Bookstore, p. 21, ISBN 978-0-8218-0445-2 - Baker, Alan (1975), Transcendental number theory, Cambridge University Press, ISBN 978-0-521-20461-3, p. 
10 - Muller, Jean-Michel (2006), Elementary functions (2nd ed.), Boston, MA: Birkhäuser Boston, ISBN 978-0-8176-4372-0, sections 4.2.2 (p. 72) and 5.5.2 (p. 95) - Hart; Cheney; Lawson; et al. (1968), Computer Approximations, SIAM Series in Applied Mathematics, New York: John Wiley, section 6.3, p. 105–111 - Zhang, M.; Delgado-Frias, J.G.; Vassiliadis, S. (1994), "Table driven Newton scheme for high precision logarithm generation", IEE Proceedings Computers & Digital Techniques, 141 (5): 281–292, doi:10.1049/ip-cdt:19941268, ISSN 1350-2387, Archived from the original on 29 May 2015 , section 1 for an overview - Meggitt, J. E. (April 1962), "Pseudo Division and Pseudo Multiplication Processes", IBM Journal, doi:10.1147/rd.62.0210 - Kahan, W. (May 20, 2001), Pseudo-Division Algorithms for Floating-Point Logarithms and Exponentials - Abramowitz & Stegun, eds. 1972, p. 68 - Sasaki, T.; Kanada, Y. (1982), "Practically fast multiple-precision evaluation of log(x)", Journal of Information Processing, 5 (4): 247–250, retrieved 30 March 2011 - Ahrendt, Timm (1999), Fast computations of the exponential function, Lecture notes in computer science, 1564, Berlin, New York: Springer, pp. 302–312, doi:10.1007/3-540-49116-3_28 - Hillis, Danny (January 15, 1989). "Richard Feynman and The Connection Machine". Physics Today. - Maor 2009, p. 135 - Frey, Bruce (2006), Statistics hacks, Hacks Series, Sebastopol, CA: O'Reilly, ISBN 978-0-596-10164-0, chapter 6, section 64 - Ricciardi, Luigi M. (1990), Lectures in applied mathematics and informatics, Manchester: Manchester University Press, ISBN 978-0-7190-2671-3, p. 21, section 1.3.2 - Bakshi, U. A. (2009), Telecommunication Engineering, Pune: Technical Publications, ISBN 978-81-8431-725-1, section 5.2 - Maling, George C. (2007), "Noise", in Rossing, Thomas D., Springer handbook of acoustics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-30446-5, section 23.0.2 - Tashev, Ivan Jelev (2009), Sound Capture and Processing: Practical Approaches, New York: John Wiley & Sons, ISBN 978-0-470-31983-3, p. 48 - Chui, C.K. (1997), Wavelets: a mathematical tool for signal processing, SIAM monographs on mathematical modeling and computation, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-384-8, p. 180 - Crauder, Bruce; Evans, Benny; Noell, Alan (2008), Functions and Change: A Modeling Approach to College Algebra (4th ed.), Boston: Cengage Learning, ISBN 978-0-547-15669-9, section 4.4. - Bradt, Hale (2004), Astronomy methods: a physical approach to astronomical observations, Cambridge Planetary Science, Cambridge University Press, ISBN 978-0-521-53551-9, section 8.3, p. 231 - IUPAC (1997), A. D. McNaught, A. Wilkinson, ed., Compendium of Chemical Terminology ("Gold Book") (2nd ed.), Oxford: Blackwell Scientific Publications, doi:10.1351/goldbook, ISBN 978-0-9678550-9-7 - Bird, J. O. (2001), Newnes engineering mathematics pocket book (3rd ed.), Oxford: Newnes, ISBN 978-0-7506-4992-6, section 34 - Goldstein, E. Bruce (2009), Encyclopedia of Perception, Encyclopedia of Perception, Thousand Oaks, CA: Sage, ISBN 978-1-4129-4081-8, p. 355–356 - Matthews, Gerald (2000), Human performance: cognition, stress, and individual differences, Human Performance: Cognition, Stress, and Individual Differences, Hove: Psychology Press, ISBN 978-0-415-04406-6, p. 48 - Welford, A. T. (1968), Fundamentals of skill, London: Methuen, ISBN 978-0-416-03000-6, OCLC 219156, p. 61 - Paul M. 
Fitts (June 1954), "The information capacity of the human motor system in controlling the amplitude of movement", Journal of Experimental Psychology, 47 (6): 381–391, doi:10.1037/h0055392, PMID 13174710, reprinted in Paul M. Fitts (1992), "The information capacity of the human motor system in controlling the amplitude of movement" (PDF), Journal of Experimental Psychology: General, 121 (3): 262–269, doi:10.1037/0096-34220.127.116.112, PMID 1402698, retrieved 30 March 2011 - Banerjee, J. C. (1994), Encyclopaedic dictionary of psychological terms, New Delhi: M.D. Publications, ISBN 978-81-85880-28-0, OCLC 33860167, p. 304 - Nadel, Lynn (2005), Encyclopedia of cognitive science, New York: John Wiley & Sons, ISBN 978-0-470-01619-0, lemmas Psychophysics and Perception: Overview - Siegler, Robert S.; Opfer, John E. (2003), "The Development of Numerical Estimation. Evidence for Multiple Representations of Numerical Quantity" (PDF), Psychological Science, 14 (3): 237–43, doi:10.1111/1467-9280.02438, PMID 12741747 - Dehaene, Stanislas; Izard, Véronique; Spelke, Elizabeth; Pica, Pierre (2008), "Log or Linear? Distinct Intuitions of the Number Scale in Western and Amazonian Indigene Cultures", Science, 320 (5880): 1217–1220, doi:10.1126/science.1156540, PMC , PMID 18511690 - Breiman, Leo (1992), Probability, Classics in applied mathematics, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-296-4, section 12.9 - Aitchison, J.; Brown, J. A. C. (1969), The lognormal distribution, Cambridge University Press, ISBN 978-0-521-04011-2, OCLC 301100935 - Jean Mathieu and Julian Scott (2000), An introduction to turbulent flow, Cambridge University Press, p. 50, ISBN 978-0-521-77538-0 - Rose, Colin; Smith, Murray D. (2002), Mathematical statistics with Mathematica, Springer texts in statistics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-95234-5, section 11.3 - Tabachnikov, Serge (2005), Geometry and Billiards, Providence, R.I.: American Mathematical Society, pp. 36–40, ISBN 978-0-8218-3919-5, section 2.1 - Durtschi, Cindy; Hillison, William; Pacini, Carl (2004), "The Effective Use of Benford's Law in Detecting Fraud in Accounting Data" (PDF), Journal of Forensic Accounting, V: 17–34 - Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag, ISBN 978-3-540-21045-0, pages 1-2 - Harel, David; Feldman, Yishai A. (2004), Algorithmics: the spirit of computing, New York: Addison-Wesley, ISBN 978-0-321-11784-7, p. 143 - Knuth, Donald (1998), The Art of Computer Programming, Reading, Mass.: Addison-Wesley, ISBN 978-0-201-89685-5, section 6.2.1, pp. 409–426 - Donald Knuth 1998, section 5.2.4, pp. 158–168 - Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag, p. 20, ISBN 978-3-540-21045-0 - Mohr, Hans; Schopfer, Peter (1995), Plant physiology, Berlin, New York: Springer-Verlag, ISBN 978-3-540-58016-4, chapter 19, p. 298 - Eco, Umberto (1989), The open work, Harvard University Press, ISBN 978-0-674-63976-8, section III.I - Sprott, Julien Clinton (2010), Elegant Chaos: Algebraically Simple Chaotic Flows, New Jersey: World Scientific, ISBN 978-981-283-881-0, section 1.9 - Helmberg, Gilbert (2007), Getting acquainted with fractals, De Gruyter Textbook, Berlin, New York: Walter de Gruyter, ISBN 978-3-11-019092-2 - Wright, David (2009), Mathematics and music, Providence, RI: AMS Bookstore, ISBN 978-0-8218-4873-9, chapter 5 - Bateman, P. T.; Diamond, Harold G. 
(2004), Analytic number theory: an introductory course, New Jersey: World Scientific, ISBN 978-981-256-080-3, OCLC 492669517, theorem 4.1 - P. T. Bateman & Diamond 2004, Theorem 8.15 - Slomson, Alan B. (1991), An introduction to combinatorics, London: CRC Press, ISBN 978-0-412-35370-3, chapter 4 - Ganguly, S. (2005), Elements of Complex Analysis, Kolkata: Academic Publishers, ISBN 978-81-87504-86-3, Definition 1.6.3 - Nevanlinna, Rolf Herman; Paatero, Veikko (2007), "Introduction to complex analysis", London: Hilger, Providence, RI: AMS Bookstore, Bibcode:1974aitc.book.....W, ISBN 978-0-8218-4399-4, section 5.9 - Moore, Theral Orvis; Hadlock, Edwin H. (1991), Complex analysis, Singapore: World Scientific, ISBN 978-981-02-0246-0, section 1.2 - Wilde, Ivan Francis (2006), Lecture notes on complex analysis, London: Imperial College Press, ISBN 978-1-86094-642-4, theorem 6.1. - Higham, Nicholas (2008), Functions of Matrices. Theory and Computation, Philadelphia, PA: SIAM, ISBN 978-0-89871-646-7, chapter 11. - Neukirch, Jürgen (1999). Algebraic Number Theory. Grundlehren der mathematischen Wissenschaften. 322. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021., section II.5. - Hancock, Edwin R.; Martin, Ralph R.; Sabin, Malcolm A. (2009), Mathematics of Surfaces XIII: 13th IMA International Conference York, UK, September 7–9, 2009 Proceedings, Springer, p. 379, ISBN 978-3-642-03595-1 - Stinson, Douglas Robert (2006), Cryptography: Theory and Practice (3rd ed.), London: CRC Press, ISBN 978-1-58488-508-5 - Lidl, Rudolf; Niederreiter, Harald (1997), Finite fields, Cambridge University Press, ISBN 978-0-521-39231-0 - Corless, R.; Gonnet, G.; Hare, D.; Jeffrey, D.; Knuth, Donald (1996), "On the Lambert W function" (PDF), Advances in Computational Mathematics, Berlin, New York: Springer-Verlag, 5: 329–359, doi:10.1007/BF02124750, ISSN 1019-7168 - Cherkassky, Vladimir; Cherkassky, Vladimir S.; Mulier, Filip (2007), Learning from data: concepts, theory, and methods, Wiley series on adaptive and learning systems for signal processing, communications, and control, New York: John Wiley & Sons, ISBN 978-0-471-68182-3, p. 357 - Bourbaki, Nicolas (1998), General topology. Chapters 5—10, Elements of Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-64563-4, MR 1726872, section V.4.1 - Ambartzumian, R. V. (1990), Factorization calculus and geometric probability, Cambridge University Press, ISBN 978-0-521-34535-4, section 1.4 - Esnault, Hélène; Viehweg, Eckart (1992), Lectures on vanishing theorems, DMV Seminar, 20, Basel, Boston: Birkhäuser Verlag, ISBN 978-3-7643-2822-1, MR 1193913, section 2 - Apostol, T.M. (2010), "Logarithm", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W., NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0521192255, MR 2723248 - Media related to Logarithm at Wikimedia Commons - The dictionary definition of logarithm at Wiktionary - Khan Academy: Logarithms, free online micro lectures - Hazewinkel, Michiel, ed. (2001) , "Logarithmic function", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4 - Colin Byfleet, Educational video on logarithms, retrieved 2010-10-12 - Edward Wright, Translation of Napier's work on logarithms, Archived from the original on 3 December 2002, retrieved 2010-10-12
- A sequential, step-by-step solution to a problem, written in human language, is called an algorithm.
- Writing the algorithm is the first step of the solution process: after analysing the problem, the programmer writes the algorithm for that problem.
- Examples of algorithms appear in the topic list at the end of this section.
1. A graphical representation of a program is called a flowchart.
2. Some standard symbols used in flowcharts are: the Start/Stop terminal box, Input/Output box, Process/Instruction box, lines or arrows, Decision box, Connector box, Comment box, Preparation box, and Separate box.
Q. Make a flowchart to input a temperature; if the temperature is less than 32 then print "below freezing", otherwise print "above freezing". (A code sketch of the same logic follows the list below.)
- Flowchart for searching for a prime number
- Factorial C program, algorithm and flowchart
- Flowchart for finding Armstrong numbers
- Rules for constructing an algorithm
- if, if...else, nested if...else, and if...else if statements and flowcharts
- Algorithm for a rectangle number pattern
- Algorithm for a star pyramid
- Draw a flowchart, and write the algorithm and a C program to calculate a sum of squares
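The temperature question above needs only a single decision box. Here is a minimal sketch of that logic in Python (the original page works in C; this is just an illustration of the same flow, with the 32-degree threshold taken from the exercise):

```python
def classify_temperature(temperature):
    """Mirrors the flowchart: one input, one decision box, one output."""
    if temperature < 32:
        return "below freezing"
    else:
        return "above freezing"

# Input -> decision -> output, exactly as in the flowchart.
temperature = float(input("Enter temperature: "))
print(classify_temperature(temperature))
```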
Volumes of Solids: find the volume of each figure, rounding to the nearest tenth. Scroll down the page if you need more explanations of the volume formulas, examples of how to use them, and worksheets. Math 2 Unit 13 Worksheet 2, "Volumes of Solid Figures", gives the volume formulas: for a prism or cylinder, V = Bh, where B is the area of the base; for a pyramid or cone, V = (1/3)Bh. Volume of Composite Solids Worksheet, Problem 1: Allie has two aquariums connected by a small square prism; find the volume of the double aquarium. Square pyramid, cylinder, cone: sketch a pyramid having a different shaped base. Answers will vary but may include cover, covering, net, jacket, outside, surface, exterior, and shell. What volume of fuel may the rocket contain? Surface area of solids of revolution using plane geometry and calculus. Volume of Prisms and Cylinders (White Plains Public Schools). Then ask for volunteers to share their solutions with calculations. How to Determine the Volume of Solid Figures? Lesson 21, Volume of Composite Solids (EngageNY). NASA scientists use geometry to compute the volume of space rockets sent to other planets. Materials required for examination. Volume of Composite Solids Worksheet (Onlinemath4all). One of the topics that is difficult for students to grasp is finding the volume of a solid with a known cross section. Surface Area of a Prism: suppose that we want to find the lateral area and total surface area of a right triangular prism; the bases of this prism are right triangles, and the lateral faces are rectangles. The faces excluding the top and bottom make up the lateral surface area of the solid. It is from the Applications of Integration unit. This is the shape of a power-plant cooling tower: a hyperboloid generated by revolving two hyperbolas joined at their vertices. 2. Identify the following 3-D objects: a) cylinder, b) right rectangular prism, c) right triangular prism, d) triangular pyramid. 3. A rectangular garage has a volume of 40. So the total surface area of this geometrical figure is the area of the two bases plus the curved surface area. Surface Area of Rectangular Prisms. Really get a solid foundation in finding the volume of pyramids and prisms. We measure the height perpendicularly to the plane of the base. Even though a finial is a solid of revolution, no function exists to model the complex shape of its revolving line, and measuring points along its edge is not easy. Then have students draw conclusions about whether the triangular prism has half the volume of the rectangular one. Students will count the cubes to find the volume of the shapes. Surface Area and Volume: for more such worksheets visit www.
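The formulas quoted in these worksheets amount to a few one-liners. The following Python sketch is illustrative only (the helper names and the cylinder example are mine, not taken from any worksheet):

```python
import math

def prism_or_cylinder_volume(base_area, height):
    """V = B * h, where B is the area of the base."""
    return base_area * height

def pyramid_or_cone_volume(base_area, height):
    """V = (1/3) * B * h."""
    return base_area * height / 3

def sphere_volume(radius):
    """V = (4/3) * pi * r^3."""
    return 4 / 3 * math.pi * radius ** 3

# Example: a cylinder of radius 2 yd and height 5 yd, rounded to the nearest tenth.
print(round(prism_or_cylinder_volume(math.pi * 2 ** 2, 5), 1))  # ~62.8 cubic yards
```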
Thank you very much for your cooperation. Nothing like a rich criminal investigation to liven up math class! The page gives the volume formulas, examples of how to use the formulas, and worksheets. Round your answer to the nearest tenth. The answer key for the volume worksheet explains the procedure before moving on to problem solving, with answers rounded to the nearest tenth. Volume Formulas: video lessons and step-by-step examples. The area of this triangle is exactly half the area of the rectangle formed by the triangle and the red dotted line. To find the scale factor of two similar figures, locate two corresponding sides, one on each figure. Find surface area worksheets for rectangular prisms, cylinders, cones, spheres, and triangular prisms. Mr Domagalski, Unit: Surface Area and Volume of Solids. To get the PDF worksheet, simply push the button titled. Mike doesn't understand how volume works for a prism and Henry is trying to explain it to him: it's what is inside the. The rest of the problems have a picture of a shape, and students calculate the volume. List of Math Area Formulas; List of Math Volume Formulas; Printable Geometry Formula Sheets. Write down a question that you still have about measuring surface area. Find the volume of the connecting prism. Have students find the total surface area by adding the base area to the lateral area of the pyramid. A lab sheet helps students who have difficulty drawing single cubes. How Does Online Tutoring Work? Volume of Compound Figures (SAS). How much liquid can the glass hold? Truncated cone volume formula. The internal measurements of the tin are shown. One way of overcoming that difficulty is in thinking about cross sections of such objects. Do not worry if at the beginning the points do not perfectly fall on the edge; later, they may be moved to the desired locations. Theoretical probability with two independent events; throughout all the units we will be looking at logic and patterns to solve problems and puzzles. Solid figures volume and surface area worksheets (PDF): calculate the volume and surface area of spheres, cylinders, triangular prisms, pyramids, and rectangular prisms. 11-4 Volume of Prisms and Cylinders Practice Worksheet (PDF). Mainly worded problems on finding the volume of prisms, spheres and pyramids. Volume of a Sphere Worksheet (PDF). Find the volume of each rectangular prism shown below. Volumes of Solids: Free Pre-Algebra Worksheets (KidSmart).
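Where the worksheets above talk about adding base area and lateral area, the arithmetic is simple enough to show directly. A small illustrative Python sketch (the helper names and dimensions are mine, not from any worksheet):

```python
import math

def rectangular_prism_surface_area(length, width, height):
    """Total surface area = 2(lw + lh + wh)."""
    return 2 * (length * width + length * height + width * height)

def cylinder_surface_area(radius, height):
    """Total surface area = two circular bases plus the curved lateral surface."""
    return 2 * math.pi * radius ** 2 + 2 * math.pi * radius * height

print(rectangular_prism_surface_area(3, 4, 5))   # 94 square units
print(round(cylinder_surface_area(2, 5), 1))     # ~88.0 square units
```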
Select a solid, find its volume, and compare solids with one another in these grade-level worksheets on the volume and surface area of solids. Volume worksheets are available as PDF files, including problems on the volume of concrete. The units of volume are the cubic centimeter, cubic millimeter, cubic meter, liter, and gallon. Ask students to estimate which shape has the greater volume before computing it. Lesson Worksheet: Volumes of Composite Solids (Nagwa). This is one of those situations where you just embrace the cheesiness. Schedule an online tutoring session with Nancy or Hannah. Which parts of the soup can match which parts of the formula? Answers to Chapter 12: Volume of Geometric Solids. Why do we need to calculate volume in solids? Give students two new sets of dimensions for new situations requiring measurement of square pyramids. Composite Solids Worksheet: keep answers in terms of π unless indicated to round to the nearest tenth. Composite Solids (CK-12 Foundation). Volume worksheets from Idea Galaxy Teacher. Volumes of Solids answer key, page 1 (PDF). Work out the volume of each of the four figures. Round your answers to the nearest tenth, if necessary. Looking into a cylinder, it is plain that the base is a circle, which explains why the first part of the formula seeks to find the area of the circle below. This Volumes of Solids worksheet is suitable for 7th–9th grade; in this volume worksheet, students observe diagrams and extrapolate the correct items to plug into the formulas. Nets of Solids: a net is a plane figure that can be folded to make a solid; for each diagram, give the net and a description (cube, rectangular prism, and so on). Work your way through the problems in this set of volume-of-solid-figures worksheets and master calculating the volume of 3D shapes using the appropriate formulas. Copies of the Volume of Compound Figures practice worksheet (M-5-1-2). Surface Area and Volume Worksheets with Answers. Determine the volume of solids by counting cubes. This page shows a set of three-dimensional solids that have their dimensions labeled, and the student's task is to compute the volume of each. Surface area of a solid object is a measure of the total area that its surface occupies. This may seem a little bit odd because at this step you have to imagine the axis of revolution. Math mazes are a great twist on the traditional worksheet. This tutorial explains how to find volume. What is the length around the track? Infinite Geometry: Solid Similarity Worksheet (Parkway Schools). How much plastic is displaced? Round to the nearest cubic metre. What kind of geometric shape results?
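For composite-solid problems like the connected aquariums mentioned earlier, the volume is simply the sum of the pieces. A minimal sketch with hypothetical dimensions (the worksheet's actual measurements are not given here, so these numbers are made up):

```python
def rectangular_prism_volume(length, width, height):
    """V = l * w * h."""
    return length * width * height

# Hypothetical dimensions: two aquariums joined by a small square connecting prism.
tank_a = rectangular_prism_volume(24, 12, 16)
tank_b = rectangular_prism_volume(18, 12, 16)
connector = rectangular_prism_volume(6, 4, 4)   # the small square prism between them

total_volume = tank_a + tank_b + connector      # composite volume = sum of the parts
print(total_volume, "cubic inches")
```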
A hazard ratio is the ratio of two hazard functions where a hazard function describes the chances of an event occurring within a group at a particular time. It's commonly used to evaluate the effect of a particular drug on a disease. The hazard ratio may also be used to measure the effect of making a mechanical component out of a given material. You can calculate the hazard ratio by plotting the two hazard functions. Establish the study groups. For example, you might want to test a drug's effect on a specific disease. In this case, you would typically divide patients with the disease into two groups. The test group will receive the drug and the control group will receive a placebo (sugar pill). Create the chart for the hazard function on graph paper. The horizontal line will represent time and the vertical line will represent the number of events that occur during each time period. This event should be something that occurs once to each member of the group, such as a fatality. Plot the hazard function. For each time interval during the test period on the horizontal axis, mark the total number of deaths on the vertical axis. Perform this procedure for both study groups. Divide the value of the hazard function for the test group by the value of control group to get the hazard ratio. Values less than 1 indicate the drug improved patient longevity and values greater than 1 mean that the drug impaired patient longevity. Graph the hazard ratio over the test period. Typically, you will then estimate the hazard ratio function with a mathematical function.
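As a quick numeric illustration of the procedure above (the event counts are made up, and real studies usually use survival-analysis estimators such as Cox regression rather than raw ratios):

```python
# Hypothetical deaths per time interval for a test group (drug) and a control group (placebo).
test_events    = [2, 3, 4, 3, 2]
control_events = [4, 6, 7, 6, 5]

# Divide the test group's hazard by the control group's hazard at each interval.
hazard_ratios = [t / c for t, c in zip(test_events, control_events)]
print(hazard_ratios)          # values below 1 suggest the drug improved longevity

# One crude summary over the test period is the average of the interval ratios.
print(sum(hazard_ratios) / len(hazard_ratios))
```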
Presentation on theme: "Scatterplots, Association, and Correlation"— Presentation transcript: 1Scatterplots, Association, and Correlation Chapter 7 2Looking at Scatterplots Scatterplots may be the most common and most effective display for data.In a scatterplot, you can see patterns, trends, relationships, and even the occasional extraordinary value sitting apart from the others.Scatterplots are the best way to start observing the relationship and the ideal way to picture associations between two quantitative variables. 3Looking at Scatterplots Relationships between variables are often at the heart of what we’d like to learn from data:Does education level effect income?Does the cost of an athletic shoe indicate the quality of the shoe?Do students learn better with technology? 4Looking at Scatterplots When looking at scatterplots, we will look for direction, form, strength, and unusual features.Direction:A pattern that runs from the upper left to the lower right is said to have a negative direction.A trend running the other way has a positive direction. 5Looking at Scatterplots Can the NOAA predict where a hurricane will go?The figure shows a negative direction between the year since 1970 and the and the prediction errors made by NOAA.As the years have passed, the predictions have improved (errors have decreased). 6DirectionA pattern that runs from upper left to lower right is said to be negative. 7DirectionA pattern that runs from lower left to upper right is said to be positive. 8Looking at Scatterplots The example in the text shows a negative association between central pressure and maximum wind speedAs the central pressure increases, the maximum wind speed decreases. 9FormIf there is a straight line (linear) relationship, it will appear as a cloud or swarm of points stretched out in a generally consistent, straight form. 10FormIf the relationship isn’t straight, but curves gently, while still increasing or decreasing steadily, we can often find ways to make it more nearly straight. 11Form If the relationship curves sharply, the methods of this book cannot really help us. 12StrengthAt one extreme, the points appear to follow a single stream (whether straight, curved, or bending all over the place). 13StrengthAt the other extreme, the points appear as a vague cloud with no discernible trend or pattern:Note: we will quantify the amount of scatter soon. 14Unusual Features Look for the unexpected. Often the most interesting thing to see in a scatterplot is the thing you never thought to look for.One example of such a surprise is an outlier standing away from the overall pattern of the scatterplot.Clusters or subgroups should also raise questions.They may be a clue that you should split the data into subgroups rather than looking at it all together. 15What do you think the scatterplot would look like? Shoe size and grade point average?The scatterplot is likely to be randomly scattered.There is no association between shoe size and grade point average.Time for a mile run and age?The very young will probably have very high times. Older people will probably have very high times. Run times are likely to be lowest for people in their late teens and early twenties. The association is likely to be moderate and curved with no dominant direction. 16Drug dosage and pain relief? The association is likely to be positive, strong, and curved. Assuming that the drug is an effective pain reliever, the degree of pain relief is will increase. 
Eventually , the association is likely to level off until no further pain relief is possible since the pain is gone.Age of car and cost of repairs?The association is positive, moderate, and linear. As cars get older, they usually require more repairs. 17Roles for VariablesIt is important to determine which of the two quantitative variables goes on the x-axis and which on the y-axis.This determination is made based on the roles played by the variables.When the roles are clear, the explanatory or predictor variable goes on the x-axis, and the response variable (variable of interest) goes on the y-axis. 18Roles for VariablesThe roles that we choose for variables are more about how we think about them rather than about the variables themselves.Just placing a variable on the x-axis doesn’t necessarily mean that it explains or predicts anything. And the variable on the y-axis may not respond to it in any way. 19ExampleSuppose you were to collect data for each pair of variables listed below. You want to make a scatterplot. Which variable would you use as the explanatory variable and which is the response variable? Why? Discuss the likely direction, form, and strength.A. When climbing a mountain: altitude, temperatureExplanatory – altitude; Response – temperatureTo predict temperature based on altitude. Scatterplot: negative, possibly straight, moderate 20People: Age, grip strength Explanatory – age; Response – grip strengthTo predict grip strength based on age. Scatterplot: curved down, moderate. Very young and elderly would have less grip strength than adultsDrivers: blood alcohol level, reaction timeExplanatory – blood alcohol level; Response – reaction time and other way around To predict reaction time based on blood alcohol level. Scatterplot: positive, nonlinear, moderately strong 21CorrelationData collected from students in Statistics classes included their heights (in inches) and weights (in pounds):Here we see a positive association and a fairly straight form, although there seems to be a high outlier. 22CorrelationHow strong is the association between weight and height of Statistics students?If we had to put a number on the strength, we would not want it to depend on the units we used.A scatterplot of heights (in centimeters) and weights (in kilograms) doesn’t change the shape of the pattern: 23CorrelationSince the units don’t matter, why not remove them altogether?We could standardize both variables and write the coordinates of a point as (zx, zy).Here is a scatterplot of the standardized weights and heights: 24CorrelationNote that the underlying linear pattern seems steeper in the standardized plot than in the original scatterplot.That’s because we made the scales of the axes the same.Equal scaling gives a neutral way of drawing the scatterplot and a fairer impression of the strength of the association. 25CorrelationSome points (those in green) strengthen the impression of a positive association between height and weight.Other points (those in red) tend to weaken the positive association.Points with z-scores of zero (those in blue) don’t vote either way. 26CorrelationThe correlation coefficient (r) gives us a numerical measurement of the strength of the linear relationship between the explanatory and response variables. 27CorrelationFor the students’ heights and weights, the correlation isWhat does this mean in terms of strength? We’ll address this shortly. 
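Since the slides build the correlation coefficient from standardized values, the computation can be written out directly. A minimal Python sketch (the height and weight numbers are made up, not the class data from the slides): standardize each variable, multiply the z-scores pointwise, and sum, dividing by n − 1.

```python
def correlation(xs, ys):
    """Pearson correlation r = sum(zx * zy) / (n - 1), using sample standard deviations."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sd_x = (sum((x - mean_x) ** 2 for x in xs) / (n - 1)) ** 0.5
    sd_y = (sum((y - mean_y) ** 2 for y in ys) / (n - 1)) ** 0.5
    zx = [(x - mean_x) / sd_x for x in xs]
    zy = [(y - mean_y) / sd_y for y in ys]
    return sum(a * b for a, b in zip(zx, zy)) / (n - 1)

heights = [61, 64, 66, 68, 70, 73]        # made-up heights in inches
weights = [120, 135, 140, 155, 165, 180]  # made-up weights in pounds
print(round(correlation(heights, weights), 3))  # close to 1 for this strongly linear data
```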
28TI DirectionsSee directions on page 145 to enter points for a scatterplotPass out example 29Correlation Conditions Correlation measures the strength of the linear association between two quantitative variables.Before you use correlation, you must check several conditions:Quantitative Variables ConditionStraight Enough ConditionOutlier Condition 30Quantitative Variables Condition: Correlation applies only to quantitative variables.Don’t apply correlation to categorical data masquerading as quantitative.Check that you know the variables’ units and what they measure. 31Straight Enough Condition: You can calculate a correlation coefficient for any pair of variables.But correlation measures the strength only of the linear association, and will be misleading if the relationship is not linear. 32Outlier Condition: Outliers can distort the correlation dramatically. An outlier can make an otherwise small correlation look big or hide a large correlation.It can even give an otherwise positive association a negative correlation coefficient (and vice versa).When you see an outlier, it’s often a good idea to report the correlations with and without the point. 33Checkpoint exerciseYour statistics teacher tells you the correlation between the scores on exam one and exam 2 was 0.75.Before answering any questions about the correlation, what would you like to see and how?Scores are quantitative. Check if the straight enough condition and outlier condition are satisfied by looking at scatterplots. 34If she adds 10 points to each exam one score, how will it change the correlation? It won’t change.If she standardizes both scores how will this affect the correlation?It won’t change 35In general, if someone does poorly on exam 1, are they likely to do poorly or well on exam 2? Why? They are more likely to do poorly. The positive correlation means that low scores on exam 1 are associated with low scores on exam 2.If someone does poorly on exam 1, will they definitely do poorly on exam 2 as well?No. The general association is positive, but individual performances may vary. 36Correlation Properties The sign of a correlation coefficient gives the direction of the association.Correlation is always between –1 and +1.Correlation can be exactly equal to –1 or +1, but these values are unusual in real data because they mean that all the data points fall exactly on a single straight line.A correlation near zero corresponds to a weak linear association. 37Correlation Properties Correlation treats x and y symmetrically:The correlation of x with y is the same as the correlation of y with x.Correlation has no units.Correlation is not affected by changes in the center or scale of either variable.Correlation depends only on the z-scores, and they are unaffected by changes in center or scale. 38Correlation Properties Correlation measures the strength of the linear association between the two variables.Variables can have a strong association but still have a small correlation if the association isn’t linear.Correlation is sensitive to outliers. A single outlying value can make a small correlation large or make a large one small. 39How strong is strong?There is NO agreement on characterizations such as “weak”, “moderate” or “strong.”To use these words adds a value judgment to the to the numerical summary that correlation provides. What is weak in one context may be strong in another.Tell the correlation and show the scatterplot, others can judge for themselves. 
40Correlation ≠ Causation Whenever we have a strong correlation, it is tempting to explain it by imagining that the predictor variable has caused the response variable to change. Scatterplots and correlation coefficients never prove causation. A hidden variable that stands behind a relationship and determines it by simultaneously affecting the other two variables is called a lurking variable. 41Correlation Tables It is common in some fields to compute the correlations between each pair of variables in a collection of variables and arrange these correlations in a table. 42Be careful with correlation tables! By presenting the correlations without the checks for linearity and outliers, the table risks showing truly small correlations that have been inflated by outliers, truly large correlations that are hidden by outliers, and correlations of any size that may be meaningless because the form is not linear. 43How to find the correlation coefficient: a data table of Year (letting the starting year = 0) versus Tuition in dollars; see the directions on p. 151 for entering the data and computing r on the calculator. 44Straightening Scatterplots P. 166 #35 Create a scatterplot and describe the association (direction, form, strength!). The association between the position number of each planet and its distance from the sun is very strong, positive, and curved. 45Why would you not want to talk about the correlation between planet position and distance from the sun? It's not linear. Correlation is a measure of the degree of linear association between two variables. 46Make a scatterplot showing the logarithm of the distance vs. position. What is better about this scatterplot? (On the TI: log(L2) STO→ L3.) It still shows a curve, but it is straight enough that correlation may now be used.
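The straightening described in the last slides can be verified numerically: correlate position with distance, then with log(distance). A Python sketch (requires Python 3.10+ for statistics.correlation; the distances are rough values in astronomical units, included only for illustration):

```python
import math
import statistics

positions = [1, 2, 3, 4, 5, 6, 7, 8]
# Rough mean distances from the sun in astronomical units (approximate values).
distances = [0.39, 0.72, 1.0, 1.52, 5.2, 9.5, 19.2, 30.1]

log_distances = [math.log10(d) for d in distances]

print(statistics.correlation(positions, distances))       # lower: the relation is curved
print(statistics.correlation(positions, log_distances))   # close to 1: re-expression straightens it
```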
Gender equality, also known as sex equality, gender egalitarianism, sexual equality, or equality of the genders, is the view that everyone should receive equal treatment and not be discriminated against based on their gender. This is one of the objectives of the United Nations Universal Declaration of Human Rights, which seeks to create equality in law and in social situations, such as in democratic activities and securing equal pay for equal work. In practice, the objective of gender equality is for people to acquire, if they so choose, equal treatment throughout a society, not just in politics, the workplace, or any other policy-designated sphere. To avoid complication, genders besides women and men will not be discussed in this article. An early advocate for gender equality was Christine de Pizan, who in her 1405 book The Book of the City of Ladies wrote that the oppression of women is founded on irrational prejudice, pointing out numerous advances in society probably created by women. As a group, the Shakers, an evangelical group which practiced segregation of the sexes and strict celibacy, were early practitioners of gender equality. They branched off from a Quaker community in the north-west of England before emigrating to America in 1774. In America, the head of the Shakers' central ministry in 1788, Joseph Meacham, had a revelation that the sexes should be equal, so he brought Lucy Wright into the ministry as his female counterpart, and together they restructured society to balance the rights of the sexes. Meacham and Wright established leadership teams where each elder, who dealt with the men's spiritual welfare, was partnered with an eldress, who did the same for women. Each deacon was partnered with a deaconess. Men had oversight of men; women had oversight of women. Women lived with women; men lived with men. In Shaker society, a woman did not have to be controlled or otherwise owned by any man. After Meacham's death in 1796, Wright was the head of the Shaker ministry until her own death in 1821. Shakers maintained the same pattern of gender-balanced leadership for more than 200 years. They also promoted equality by working together with other women's rights advocates. In 1859, Shaker Elder Frederick Evans stated their beliefs forcefully, writing that Shakers were "the first to disenthrall woman from the condition of vassalage to which all other religious systems (more or less) consign her, and to secure to her those just and equal rights with man that, by her similarity to him in organization and faculties, both God and nature would seem to demand". Evans and his counterpart, Eldress Antoinette Doolittle, joined women's rights advocates on speakers' platforms throughout the northeastern U.S. in the 1870s. A visitor to the Shakers wrote in 1875: - Each sex works in its own appropriate sphere of action, there being a proper subordination, deference and respect of the female to the male in his order, and of the male to the female in her order [emphasis added], so that in any of these communities the zealous advocates of "women’s rights" may here find a practical realization of their ideal. In the wider society, the movement towards gender equality began with the suffrage movement in Western cultures in the late-19th century, which sought to allow women to vote and hold elected office. This period also witnessed significant changes to women's property rights, particularly in relation to their marital status. (See for example, Married Women's Property Act 1882.) 
After World War II, a more general movement for gender equality developed based on women's liberation and feminism. The central issue was that the rights of women should be the same as of men. Feminist believe that women should be equal in every aspect to men and should have the same rights that are given to men. The United Nations and other international agencies have adopted several conventions, toward the promotion of gender equality. Prominent international instruments include: - In 1960 the Convention against Discrimination in Education was adopted, coming into force in 1962 and 1968. - The Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) is an international treaty adopted in 1979 by the United Nations General Assembly. Described as an international bill of rights for women, it came into force on 3 September 1981. - The Vienna Declaration and Programme of Action, a human rights declaration adopted by consensus at the World Conference on Human Rights on 25 June 1993 in Vienna, Austria. Women's rights are addressed at para 18. - The Declaration on the Elimination of Violence Against Women was adopted by the United Nations General Assembly in 1993. - In 1994, the twenty-year Cairo Programme of Action was adopted at the International Conference on Population and Development (ICPD) in Cairo. This non binding programme-of-action asserted that governments have a responsibility to meet individuals' reproductive needs, rather than demographic targets. As such, it called for family planning, reproductive rights services, and strategies to promote gender equality and stop violence against women. - Also in 1994, in the Americas, The Inter-American Convention on the Prevention, Punishment, and Eradication of Violence against Women, known as the Convention of Belém do Pará, called for the end of violence and discrimination against women. - At the end of the Fourth World Conference on Women, the UN adopted the Beijing Declaration on 15 September 1995 - a resolution adopted to promulgate a set of principles concerning gender equality. - The United Nations Security Council Resolution 1325 (UNSRC 1325), which was adopted on 31 October 2000, deals with the rights and protection of women and girls during and after armed conflicts. - The Maputo Protocol guarantees comprehensive rights to women, including the right to take part in the political process, to social and political equality with men, to control of their reproductive health, and an end to female genital mutilation. It was adopted by the African Union in the form of a protocol to the African Charter on Human and Peoples' Rights, and came into force in 2005. - The EU directive Directive 2002/73/EC - equal treatment of 23 September 2002 amending Council Directive 76/207/EEC on the implementation of the principle of equal treatment for men and women as regards access to employment, vocational training and promotion, and working conditions states that: "Harassment and sexual harassment within the meaning of this Directive shall be deemed to be discrimination on the grounds of sex and therefore prohibited." - The Council of Europe's Convention on preventing and combating violence against women and domestic violence, the first legally binding instrument in Europe in the field of violence against women, came into force in 2014. 
- The Council of Europe's Gender Equality Strategy 2014-2017, which has five strategic objectives: Such legislation and affirmative action policies have been critical to bringing about changes in societal attitudes. A 2015 Pew Research Center survey of citizens in 38 countries found that majorities in 37 of those 38 countries said that gender equality is at least "somewhat important," and a global median of 65% believe it is "very important" that women have the same rights as men. Most occupations are now equally available to men and women, in many countries. For example, many countries now permit women to serve in the armed forces, the police forces and to be fire fighters – occupations traditionally reserved for men. Although these continue to be male dominated occupations an increasing number of women are now active, especially in directive fields such as politics, and occupy high positions in business. Similarly, men are increasingly working in occupations which in previous generations had been considered women's work, such as nursing, cleaning and child care. In domestic situations, the role of Parenting or child rearing is more commonly shared or not as widely considered to be an exclusively female role, so that women may be free to pursue a career after childbirth. For further information, see Shared earning/shared parenting marriage. Another manifestation of the change in social attitudes is the non-automatic taking by a woman of her husband's surname on marriage. A highly contentious issue relating to gender equality is the role of women in religiously orientated societies. For example, the Cairo Declaration on Human Rights in Islam declared that women have equal dignity but not equal rights, and this was accepted by many predominantly Muslim countries. In some Christian churches, the practice of churching of women may still have elements of ritual purification and the Ordination of women to the priesthood may be restricted or forbidden. Some Christians or Muslims believe in Complementarianism, a view that holds that men and women have different but complementing roles. This view may be in opposition to the views and goals of gender equality. In addition, there are also non-Western countries of low religiosity where the contention surrounding gender equality remains. In China, cultural preference for a male child has resulted in a shortfall of women in the population. The feminist movement in Japan has made many strides and has resulted in Rethe Gender Equality Bureau, but Japan still remains low in gender equality compared to other industrialized nations. The notion of gender equality, and of its degree of achievement in a certain country, is very complex, because there are countries that have a history of a high level of gender equality in certain areas of life but not in other areas. An example is Finland, which has offered very high opportunities to women in public/professional life but has had a weak legal approach to the issue of violence against women, with the situation in this country having been called a paradox. Denmark received harsh criticism for inadequate laws in regard to sexual violence in a 2008 report produced by Amnesty International, which described Danish laws as "inconsistent with international human rights standards". This led to Denmark reforming its sexual offenses legislation in 2013. Indeed, there is a need of caution when categorizing countries by the level of gender equality that they have achieved. 
According to Mala Htun and Laurel Weldon, "gender policy is not one issue but many", and:
- "When Costa Rica has a better maternity leave than the United States, and Latin American countries are quicker to adopt policies addressing violence against women than the Nordic countries, one at least ought to consider the possibility that fresh ways of grouping states would further the study of gender politics."

Not all ideas for gender equality have been popularly adopted. For example, topfreedom, the right to be bare-breasted in public, frequently applies only to males and has remained a marginal issue. Breastfeeding in public is more commonly tolerated, especially in semi-private places such as restaurants.

Some feminists have criticized the political discourse and policies employed to achieve the above items of "progress" in gender equality. These critics argue that such gender equality strategies are superficial, because they do not seek to challenge social structures of male domination and only aim at improving the situation of women within the societal framework of subordination of women to men, and that official public policies (such as state policies or the policies of international bodies) are questionable, as they are applied in a patriarchal context and are directly or indirectly controlled by agents of the male-dominated system. One criticism of gender equality policies, in particular those of the European Union, is that they disproportionately focus on integrating women into public life but do not seek to genuinely address the deep oppression in the private sphere.

A further criticism is that focusing on the situation of women in non-Western countries, while often ignoring the issues that exist in the West, is a form of imperialism and a reinforcement of Western moral superiority, and a way of "othering" domestic violence by presenting it as something specific to outsiders - the "violent others" - and not to the allegedly progressive Western cultures. These critics point out that women in Western countries often face similar problems, such as domestic violence and rape, as in other parts of the world. They also cite the fact that women faced de jure legal discrimination until just a few decades ago; for instance, in some Western countries such as Switzerland, Greece, Spain, and France, women obtained equal rights in family law only in the 1980s. Another criticism is that there is a selective public discourse with regard to different types of oppression of women, with some forms of violence, such as honor killings (most common in certain geographic regions such as parts of Asia and North Africa), being frequently the object of public debate, while other forms of violence, such as the lenient punishment for crimes of passion across Latin America, do not receive the same attention in the West. In 2002, Widney Brown, advocacy director for Human Rights Watch, pointed out that "crimes of passion have a similar dynamic [to honor killings] in that the women are killed by male family members and the crimes are perceived [in those relevant parts of the world] as excusable or understandable".
It is also argued that criticism of particular laws in many developing countries ignores the influence of colonialism on those legal systems, especially that of the French Napoleonic Code, whose influence over the world was enormous (historian Robert Holtman regards it as one of the few documents that have influenced the whole world), and which assigned married women a subordinate role and provided for leniency with regard to 'crimes of passion' (which was the case in France until 1975).

Efforts to fight inequality

World bodies have defined gender equality in terms of human rights, especially women's rights, and economic development. UNICEF states that gender equality "means that women and men, and girls and boys, enjoy the same rights, resources, opportunities and protections. It does not require that girls and boys, or women and men, be the same, or that they be treated exactly alike." UNFPA has stated that, "despite many international agreements affirming their human rights, women are still much more likely than men to be poor and illiterate. They have less access to property ownership, credit, training and employment. They are far less likely than men to be politically active and far more likely to be victims of domestic violence."

Thus, promoting gender equality is seen as an encouragement to greater economic prosperity. For example, nations of the Arab world that deny equality of opportunity to women were warned in a 2008 United Nations-sponsored report that this disempowerment is a critical factor crippling these nations' return to the first rank of global leaders in commerce, learning and culture. That is, Western bodies are less likely to conduct commerce with nations in the Middle East that retain culturally accepted attitudes towards the status and function of women in their society, in an effort to force them to change those beliefs in the face of their relatively underdeveloped economies.

Gender equality is part of the national curriculum in Great Britain and many other European countries. Personal, Social and Health Education, religious studies and language acquisition curricula tend to address gender equality as a serious topic for discussion and to analyse its effects in society. A large and growing body of research has shown how gender inequality undermines health and development. To overcome gender inequality, the United Nations Population Fund states that "Women's empowerment and gender equality requires strategic interventions at all levels of programming and policy-making. These levels include reproductive health, economic empowerment, educational empowerment and political empowerment." UNFPA says that "research has also demonstrated how working with men and boys as well as women and girls to promote gender equality contributes to achieving health and development outcomes."

Violence against women

Violence against women is a technical term used to refer collectively to violent acts that are primarily or exclusively committed against women. This type of violence is gender-based, meaning that the acts of violence are committed against women expressly because they are women, or as a result of patriarchal gender constructs.
The UN Declaration on the Elimination of Violence Against Women defines violence against women as "any act of gender-based violence that results in, or is likely to result in, physical, sexual or psychological harm or suffering to women, including threats of such acts, coercion or arbitrary deprivation of liberty, whether occurring in public or in private life" and states that:
- "violence against women is a manifestation of historically unequal power relations between men and women, which have led to domination over and discrimination against women by men and to the prevention of the full advancement of women, and that violence against women is one of the crucial social mechanisms by which women are forced into a subordinate position compared with men"

According to some theories, violence against women is often caused by the acceptance of violence by various cultural groups as a means of conflict resolution within intimate relationships. Studies on intimate partner violence (IPV) victimization among ethnic minorities in the United States have consistently revealed that immigrants are a high-risk group for intimate violence.

Forms of violence against women include sexual violence (including war rape, marital rape, date rape by drugs or alcohol, and child sexual abuse, the latter often in the context of child marriage), domestic violence, forced marriage, female genital mutilation, forced prostitution, sex trafficking, honor killings, dowry killings, acid attacks, stoning, flogging, forced sterilization, forced abortion, violence related to accusations of witchcraft, and mistreatment of widows (e.g. widow inheritance). Fighting violence against women is considered a key issue for achieving gender equality. The Council of Europe adopted the Convention on preventing and combating violence against women and domestic violence (Istanbul Convention).

In Western countries which are overall safe (i.e. where gang murders, armed kidnappings, civil unrest, and other similar acts are rare), the vast majority of murdered women are killed by partners or ex-partners: during 2004-2009, former and current partners were responsible for more than 80% of all cases of murders of women in Cyprus, France, and Portugal. By contrast, in countries with a high level of organized criminal activity and gang violence, murders of women are more likely to occur in the public sphere, often in a general climate of indifference and impunity. In addition, many countries do not have adequate comprehensive data collection on such murders, aggravating the problem.

- "In some developing countries, practices that subjugate and harm women - such as wife-beating, killings in the name of honour, female genital mutilation/cutting and dowry deaths - are condoned as being part of the natural order of things."

In most countries, it is only in recent decades that violence against women (in particular when committed in the family) has received significant legal attention. The Istanbul Convention acknowledges the long tradition of European countries of ignoring, de jure or de facto, this form of violence. In its explanatory report, at para 219, it states:
- "There are many examples from past practice in Council of Europe member states that show that exceptions to the prosecution of such cases were made, either in law or in practice, if victim and perpetrator were, for example, married to each other or had been in a relationship.
The most prominent example is rape within marriage, which for a long time had not been recognised as rape because of the relationship between victim and perpetrator."

In Opuz v. Turkey, the European Court of Human Rights recognized violence against women as a form of discrimination against women, stating at para 200: "[T]he Court considers that the violence suffered by the applicant and her mother may be regarded as gender-based violence which is a form of discrimination against women." This is also the position of the Istanbul Convention, which reads:
- "Article 3 – Definitions. For the purpose of this Convention: a. "violence against women" is understood as a violation of human rights and a form of discrimination against women [...]".

In some cultures, acts of violence against women are seen as crimes against the male 'owners' of the woman, such as the husband, father or male relatives, rather than against the woman herself. This leads to practices in which men inflict violence upon women in order to get revenge on male members of the woman's family. Such practices include payback rape, a form of rape specific to certain cultures, particularly the Pacific Islands, which consists of the rape of a female, usually by a group of several males, as revenge for acts committed by members of her family, such as her father or brothers; the rape is meant to humiliate the father or brothers, as punishment for their prior behavior towards the perpetrators.

Reproductive and sexual health and rights

The importance of women having the right and the ability to control their own bodies, reproductive decisions and sexuality, and the need for gender equality in order to achieve these goals, were recognized as crucial by the Fourth World Conference on Women in Beijing and the UN International Conference on Population and Development Programme of Action. The World Health Organization (WHO) has stated that the promotion of gender equality is crucial in the fight against HIV/AIDS.

Maternal mortality is a major problem in many parts of the world. UNFPA states that countries have an obligation to protect women's right to health, but many countries do not do so. Maternal mortality is considered today not just an issue of development but also an issue of human rights. UNFPA says that "since 1990, the world has seen a 45 per cent decline in maternal mortality – an enormous achievement. But in spite of these gains, almost 800 women still die every day from causes related to pregnancy or childbirth. This is about one woman every two minutes." According to UNFPA:
- "Preventable maternal mortality occurs where there is a failure to give effect to the rights of women to health, equality and non-discrimination. Preventable maternal mortality also often represents a violation of a woman's right to life."

The right to reproductive and sexual autonomy is denied to women in many parts of the world, through practices such as forced sterilization, forced or coerced sexual partnering (e.g. forced marriage, child marriage), criminalization of consensual sexual acts (such as sex outside marriage), lack of criminalization of marital rape, and violence in regard to the choice of partner (honor killings as punishment for 'inappropriate' relations). Amnesty International's Secretary General has stated: "It is unbelievable that in the twenty-first century some countries are condoning child marriage and marital rape while others are outlawing abortion, sex outside marriage and same-sex sexual activity – even punishable by death."
All these practices infringe on the right of achieving reproductive and sexual health. High Commissioner for Human Rights Navi Pillay has called for full respect and recognition of women's autonomy and sexual and reproductive health rights, stating: - "Violations of women's human rights are often linked to their sexuality and reproductive role. Women are frequently treated as property, they are sold into marriage, into trafficking, into sexual slavery. Violence against women frequently takes the form of sexual violence. Victims of such violence are often accused of promiscuity and held responsible for their fate, while infertile women are rejected by husbands, families and communities. In many countries, married women may not refuse to have sexual relations with their husbands, and often have no say in whether they use contraception." Adolescent girls are at the highest risk of sexual coercion, sexual ill health, and negative reproductive outcomes. The risks they face are higher than those of boys and men; this increased risk is partly due to gender inequity (different socialization of boys and girls, gender based violence, child marriage) and partly due to biological factors (females' risk of acquiring sexually transmitted infections during unprotected sexual relations is two to four times that of males'). Socialization within rigid gender constructs often creates an environment where sexual violence is common; according to the WHO: "Sexual violence is also more likely to occur where beliefs in male sexual entitlement are strong, where gender roles are more rigid, and in countries experiencing high rates of other types of violence." The sexual health of women is often poor in societies where a woman's right to control her sexuality is not recognized. Richard A. Posner writes that "Traditionally, rape was the offense of depriving a father or husband of a valuable asset — his wife's chastity or his daughter's virginity". Historically, rape was seen in many cultures (and is still seen today in some societies) as a crime against the honor of the family, rather than against the self-determination of the woman. As a result, victims of rape may face violence, in extreme cases even honor killings, at the hands of their family members. Catharine MacKinnon argues that in male dominated societies, sexual intercourse is imposed on women in a coercive and unequal way, creating a continuum of victimization, where women have few positive sexual experiences; she writes "To know what is wrong with rape, know what is right about sex. If this, in turn, is difficult, the difficulty is as instructive as the difficulty men have in telling the difference when women see one. Perhaps the wrong of rape has proved so difficult to define because the unquestionable starting point has been that rape is defined as distinct from intercourse, while for women it is difficult to distinguish the two under conditions of male dominance." One of the challenges of dealing with sexual violence is that in many societies women are perceived as being readily available for sex, and men are seen as entitled to their bodies, until and unless women object. Rebecca Cook wrote in Submission of Interights to the European Court of Human Rights in the case of M.C. v. Bulgaria, 12 April 2003: - "The equality approach starts by examining not whether the woman said 'no', but whether she said 'yes'. 
Women do not walk around in a state of constant consent to sexual activity unless and until they say 'no', or offer resistance to anyone who targets them for sexual activity. The right to physical and sexual autonomy means that they have to affirmatively consent to sexual activity."

Freedom of movement

The degree to which women can participate (in law and in practice) in public life varies and has varied by culture, historical era, social class and other socioeconomic characteristics. Seclusion of women within the home was a common practice among the upper classes of many societies, and it still remains the case in some societies today. Before the 20th century it was also common in parts of Southern Europe, such as much of Spain.

Women's freedom of movement continues to be legally restricted in some parts of the world. This restriction is often due to marriage laws. For instance, in Yemen, marriage regulations stipulate that a wife must obey her husband and must not leave home without his permission. In some countries, women must legally be accompanied by their male guardians (such as a husband or male relative) when they leave home. CEDAW addresses freedom of movement at Article 15:
- "4. States Parties shall accord to men and women the same rights with regard to the law relating to the movement of persons and the freedom to choose their residence and domicile."

In addition to laws, women's freedom of movement is also restricted by social and religious norms - for example purdah, a religious and social practice of female seclusion prevalent among some Muslim communities in Afghanistan and Pakistan as well as among upper-caste Hindus in Northern India, such as the Rajputs, which often leads to the minimizing of women's movement in public spaces and restrictions on their social and professional interactions; or namus, a cultural concept strongly related to family honor. The custom of bride price can also curtail the free movement of women: if a wife wants to leave her husband, he may demand back the bride price that he had paid to the woman's family, and the woman's family often cannot, or does not want to, pay it back, making it difficult for women to move out of violent husbands' homes. Restrictions on freedom of movement also exist due to traditional practices such as baad, swara, or vani, common especially among Pashtun tribes in Pakistan and Afghanistan, whereby a girl is given from one family to another (often through a marriage) in order to settle disputes and feuds between the families. The girl, who now belongs to the second family, has very little autonomy and freedom, her role being to serve the new family.

Gendered arrangements of work and care

Since the 1950s, social scientists as well as feminists have increasingly criticized gendered arrangements of work and care and the male breadwinner role. Policies are increasingly targeting men as fathers as a tool for changing gender relations. Shared earning/shared parenting marriage, that is, a relationship where the partners collaborate in sharing their responsibilities inside and outside the home, is often encouraged in Western countries. Western countries with a strong emphasis on women fulfilling the role of homemaker, rather than a professional role, include parts of German-speaking Europe (parts of Germany, Austria and Switzerland), as well as the Netherlands and Ireland.
In 2011, José Manuel Barroso, then President of the European Commission, stated: "Germany, but also Austria and the Netherlands, should look at the example of the northern countries [...] that means removing obstacles for women, older workers, foreigners and low-skilled job-seekers to get into the workforce". The Netherlands and Ireland have been among the last Western countries to accept women as professionals; despite the Netherlands' image as progressive on gender issues, women in the Netherlands work less in paid employment than women in other comparable Western countries. In the early 1980s, the Commission of the European Communities report Women in the European Community found that the Netherlands and Ireland had the lowest labour participation of married women and the most public disapproval of it. In Ireland, until 1973, there was a marriage bar. In the Netherlands, from the 1990s onwards, the number of women entering the workforce has increased, but most of these women work part-time. As of 2014, the Netherlands and Switzerland were the only OECD members where most employed women worked part-time. In the United Kingdom, women made up two-thirds of workers on long-term sick leave, despite making up only half of the workforce, even after excluding maternity leave.

A key issue for ensuring gender equality in the workplace is respect for the maternity rights and reproductive rights of women. Different countries have different rules regarding maternity leave, paternity leave and parental leave. In the European Union (EU) the policies vary significantly by country, but EU members must abide by the minimum standards of the Pregnant Workers Directive and the Parental Leave Directive.

Another important issue is ensuring that employed women are not de jure or de facto prevented from having a child. For example, some countries have enacted legislation explicitly outlawing or restricting what they view as abusive clauses in employment contracts regarding reproductive rights (for example, clauses which stipulate that a woman cannot get pregnant during a specified time), rendering such contracts void or voidable. In some countries, employers who ask women to sign formal or informal documents stipulating that they will not get pregnant face legal punishment. Women often face severe violations of their reproductive rights at the hands of their employers; the International Labour Organization classifies forced abortion coerced by an employer as labour exploitation. Being the victim of a forced abortion compelled by an employer has been ruled a ground for obtaining political asylum in the US. Other abuses include routine virginity tests of unmarried employed women.

Girls' access to education

In many parts of the world, girls' access to education is very restricted. In developing parts of the world, women and girls are often denied opportunities for education and face many obstacles. These include: early and forced marriage; early pregnancy; prejudice based on gender stereotypes at home, at school and in the community; violence on the way to school, or in and around schools; long distances to schools; vulnerability to the HIV epidemic; school fees, which often lead parents to send only their sons to school; and a lack of gender-sensitive approaches and materials in classrooms.
According to the OHCHR, there were multiple attacks on schools worldwide during the period 2009-2014, with "a number of these attacks being specifically directed at girls, parents and teachers advocating for gender equality in education". The United Nations Population Fund says:
- "About two thirds of the world's illiterate adults are women. Lack of an education severely restricts a woman's access to information and opportunities. Conversely, increasing women's and girls' educational attainment benefits both individuals and future generations. Higher levels of women's education are strongly associated with lower infant mortality and lower fertility, as well as better outcomes for their children."

Political participation of women

Women are underrepresented in most countries' national parliaments. The 2011 UN General Assembly resolution on women's political participation called for female participation in politics, and expressed concern about the fact that "women in every part of the world continue to be largely marginalized from the political sphere". The Council of Europe states that:
- "Pluralist democracy requires balanced participation of women and men in political and public decision-making. Council of Europe standards provide clear guidance on how to achieve this."

Institutions also play an essential role in achieving and enforcing gender equality. However, basic legal and human rights, access to and control of resources, employment and earnings, and social and political participation are still not guaranteed in many social and legal institutions. For example, only 22 per cent of parliamentarians globally are women, and men therefore continue to occupy most positions of political and legal authority. As of November 2014, women accounted for 28% of members of the single or lower houses of parliaments in the European Union member states.

In some Western countries women have only recently obtained the right to vote, notably in Switzerland, where women gained the right to vote in federal elections in 1971; in the canton of Appenzell Innerrhoden, women obtained the right to vote on local issues only in 1991, when the canton was forced to do so by the Federal Supreme Court of Switzerland. In Liechtenstein, women were given the right to vote by the women's suffrage referendum of 1984; three prior referendums, held in 1968, 1971 and 1973, had failed to secure women's right to vote.

Economic empowerment of women

Female economic activity is a common measure of gender equality in an economy. UN Women states that: "Investing in women's economic empowerment sets a direct path towards gender equality, poverty eradication and inclusive economic growth." The UN Population Fund says that "Six out of 10 of the world's poorest people are women. Economic disparities persist partly because much of the unpaid work within families and communities falls on the shoulders of women, and because women continue to face discrimination in the economic sphere."

Gender biases also exist in product and service provision. The term "Women's Tax", also known as the "Pink Tax", refers to gendered pricing in which products or services marketed to women are more expensive than similar products marketed to men. Gender-based price discrimination involves companies selling almost identical units of the same product or service at different prices, as determined by the target market. Studies have found that women pay about $1,400 a year more than men due to gendered discriminatory pricing.
Although the "pink tax" of different goods and services is not uniform, overall women pay more for commodities that result in visual evidence of feminine body image. For example, studies have shown that women are charged more for services especially tailoring, hair cutting and laundering. A growing body of research documents what works to economically empower women, from providing access to formal financial services to training on agricultural and business management practices, though more research is needed across a variety of contexts to confirm the effectiveness of these interventions. Marriage, divorce and property laws and regulations Equal rights for women in marriage, divorce, and property/land ownership and inheritance are essential for gender equality. CEDAW has called for the end of discriminatory family laws. In 2013, UNWomen stated that "While at least 115 countries recognize equal land rights for women and men, effective implementation remains a major challenge". The legal and social treatment of married women has been often discussed as a political issue from the 19th century onwards. John Stuart Mill, in The Subjection of Women (1869) compared marriage to slavery and wrote that: "The law of servitude in marriage is a monstrous contradiction to all the principles of the modern world, and to all the experience through which those principles have been slowly and painfully worked out." In 1957, James Everett, then Minister for Justice in Ireland, stated: "The progress of organised society is judged by the status occupied by married women". Until the 1970s, legal subordination of married women was common across European countries, through marriage laws giving legal authority to the husband, as well as through marriage bars. In France, married women obtained the right to work without their husband's consent in 1965; while the paternal authority of a man over his family was ended in 1970 (before that parental responsibilities belonged solely to the father who made all legal decisions concerning the children); and a new reform in 1985 abolished the stipulation that the father had the sole power to administer the children's property. In Austria, the marriage law was overhauled between 1975 and 1983, abolishing the restrictions on married women's right to work outside the home, providing for equality between spouses, and for joint ownership of property and assets. Switzerland was one of the last countries in Europe to establish gender equality in marriage, in this country married women's rights were severely restricted until 1988, when legal reforms providing for gender equality in marriage, abolishing the legal authority of the husband, come into force (these reforms had been approved in 1985 by voters in a referendum, who narrowly voted in favor with 54.7% of voters approving). In the Netherlands, although the legal incapacity of a married woman was abolished in 1956, the marriage bar for women civil servants being lifted in 1957, it was only in 1984 that full legal equality between husband and wife was achieved - prior to 1984 the law stipulated that the husband's opinion prevailed over the wife's regarding issues such as decisions on children's education and the domicile of the family. In 1978, the Council of Europe passed the Resolution (78) 37 on equality of spouses in civil law. In the United States, the wife's legal subordination to her husband was fully ended by the case of Kirchberg v. Feenstra, 450 U.S. 
455 (1981), a United States Supreme Court case in which the Court held unconstitutional a Louisiana Head and Master law that gave sole control of marital property to the husband.

There has been, and in some places continues to be, unequal treatment of married women in various aspects of everyday life. For example, in Australia, until 1983 a husband had to authorise a married woman's application for a passport. Other practices have included, and in many countries continue to include, a requirement for the husband's consent when a married woman applies for bank loans or credit cards, as well as restrictions on the wife's reproductive rights, such as a requirement that the husband consent to the wife's obtaining contraception or having an abortion. In some places, although the law itself no longer requires the husband's consent for various actions taken by the wife, the practice continues de facto, with the husband's authorization still being sought.

Although dowry is today associated with South Asia, the practice was common until the mid-20th century in parts of Southeast Europe. For example, in Greece dowry was removed from family law only in 1983, through legal changes which reformed marriage law and provided for gender equality in marriage. These changes also dealt with the practice of women changing their surname to that of their husband upon marriage, a practice which has been outlawed or restricted in some jurisdictions because it is seen as contrary to women's rights. As a result, women in Greece are required to keep their birth names for their whole life.

Laws regulating marriage and divorce continue to discriminate against women in many countries. For example, in Yemen, marriage regulations state that a wife must obey her husband and must not leave home without his permission. In Iraq, husbands have a legal right to "punish" their wives: paragraph 41 of the criminal code states that there is no crime if an act is committed while exercising a legal right, and gives as examples of legal rights "The punishment of a wife by her husband, the disciplining by parents and teachers of children under their authority within certain limits prescribed by law or by custom". In the 1990s and the 21st century there has been progress in many African countries: for instance, in Namibia the marital power of the husband was abolished in 1996 by the Married Persons Equality Act; in Botswana it was abolished in 2004 by the Abolition of Marital Power Act; and in Lesotho it was abolished in 2006 by the Married Persons Equality Act.

Violence against and mistreatment of women in relation to marriage have come to international attention during the past decades. This includes both violence committed inside marriage (domestic violence) and violence related to marriage customs and traditions (such as dowry, bride price, forced marriage and child marriage). Violence against a wife continues to be seen as legally acceptable in some countries; for instance, in 2010 the United Arab Emirates' Supreme Court ruled that a man has the right to physically discipline his wife and children as long as he does not leave physical marks. The criminalization of adultery has been criticized as a prohibition which, in law or in practice, is used primarily against women and incites violence against women (crimes of passion, honor killings).
A 2012 joint statement by the United Nations Working Group on discrimination against women in law and in practice declared: "the United Nations Working Group on discrimination against women in law and in practice is deeply concerned at the criminalization and penalization of adultery whose enforcement leads to discrimination and violence against women." UN Women also stated that "Drafters should repeal any criminal offenses related to adultery or extramarital sex between consenting adults".

Investigation and prosecution of crimes against women and girls

Human rights organizations have expressed concern about the legal impunity of perpetrators of crimes against women, with such crimes often being ignored by the authorities. This is especially the case with murders of women in Latin America. In particular, there is impunity in regard to domestic violence. High Commissioner for Human Rights Navi Pillay has stated on domestic violence against women:
- "The reality for most victims, including victims of honor killings, is that state institutions fail them and that most perpetrators of domestic violence can rely on a culture of impunity for the acts they commit – acts which would often be considered as crimes, and be punished as such, if they were committed against strangers."

Women are often, in law or in practice, unable to access legal institutions. UN Women has said that "Too often, justice institutions, including the police and the courts, deny women justice". Often, women are denied legal recourse because the state institutions themselves are structured and operate in ways incompatible with genuine justice for women who experience violence - according to Amnesty International, "Women who are victims of gender-related violence often have little recourse because many state agencies are themselves guilty of gender bias and discriminatory practices."

Gender stereotypes

Gender stereotypes arise from the socially approved roles of women and men in the private or public sphere, at home or in the workplace. In the household, women are typically seen as mother figures, which usually places them in the classification of being "supportive" or "nurturing". Women are expected to want to be mothers and to take primary responsibility for household needs. Their male counterparts are seen as "assertive" or "ambitious", as men are usually associated with the workplace or with being the primary breadwinner for the family. Due to these views and expectations, women often face discrimination in the public sphere, such as the workplace.

A gender role is a set of societal norms dictating the types of behaviors which are generally considered acceptable, appropriate, or desirable for people based on their sex. Gender roles are usually centered on conceptions of femininity and masculinity, although there are exceptions and variations. The Istanbul Convention contains a definition of "gender", stating that "'gender' shall mean the socially constructed roles, behaviours, activities and attributes that a given society considers appropriate for women and men" (Article 3 – Definitions (c)).

Harmful traditional practices

"Harmful traditional practices" refers to forms of violence which are committed in certain communities often enough to become cultural practice, and which are accepted for that reason. Young women are the main victims of such acts, although men can also be affected. These practices occur in an environment where women and girls have unequal rights and opportunities.
These practices include, according to the Office of the United Nations High Commissioner for Human Rights:
- "female genital mutilation (FGM); forced feeding of women; early marriage; the various taboos or practices which prevent women from controlling their own fertility; nutritional taboos and traditional birth practices; son preference and its implications for the status of the girl child; female infanticide; early pregnancy; and dowry price"

Female genital mutilation is defined as "procedures that intentionally alter or cause injury to the female genital organs for non-medical reasons". An estimated 125 million women and girls living today have undergone FGM in the 29 countries where data exist. Of these, about half live in two countries, Egypt and Ethiopia. It is most commonly carried out on girls between infancy and 15 years of age. UNFPA and UNICEF state that "In every society where it is practiced, FGM is a manifestation of deeply entrenched gender inequality. It persists for many reasons. In some societies, for example, it is considered a rite of passage. In others, it is seen as a prerequisite for marriage. In some communities, whether Christian, Jewish, Muslim, the practice may even be attributed to religious beliefs. Because FGM may be considered an important part of a culture or identity, it can be difficult for families to decide against having their daughters cut. People who reject the practice may face condemnation or ostracism. Even parents who do not want their daughters to undergo FGM may feel compelled to participate in the practice."

Son preference refers to a cultural preference for sons over daughters, and manifests itself through practices such as sex-selective abortion; female infanticide; or the abandonment, neglect or abuse of girl children.

Early marriage, child marriage or forced marriage is prevalent in parts of Asia and Africa. The majority of victims seeking advice are female and aged between 18 and 23. Such marriages can have harmful effects on a girl's education and development, and may expose girls to social isolation or abuse. The 2013 UN Resolution on Child, Early and Forced Marriage calls for an end to the practice, stating: "Recognizing that child, early and forced marriage is a harmful practice that violates, abuses, or impairs human rights and is linked to and perpetuates other harmful practices and human rights violations, that these violations have a disproportionately negative impact on women and girls [...]". Despite a near-universal commitment by governments to end child marriage, "one in three girls in developing countries (excluding China) will probably be married before they are 18." UNFPA states that "over 67 million women 20-24 years old in 2010 had been married as girls. Half were in Asia, one-fifth in Africa. In the next decade, 14.2 million girls under 18 will be married every year; this translates into 39,000 girls married each day. This will rise to an average of 15.1 million girls a year, starting in 2021 until 2030, if present trends continue."

In some societies, women's ability to control their own fertility is restricted. For instance, in northern Ghana, the payment of bride price signifies a woman's obligation to bear children, and women using birth control face threats, violence and reprisals. Births in parts of Africa are often attended by traditional birth attendants (TBAs), who sometimes perform rituals that are dangerous to the health of the mother.
In many societies, a difficult labour is believed to be a divine punishment for marital infidelity, and such women face abuse and are pressured to "confess" to the infidelity. The custom of bride price has been criticized as contributing to the mistreatment of women in marriage and preventing them from leaving abusive marriages. UN Women recommended its abolition, and stated that: "Legislation should [...] State that divorce shall not be contingent upon the return of bride price but such provisions shall not be interpreted to limit women's right to divorce; State that a perpetrator of domestic violence, including marital rape, cannot use the fact that he paid bride price as a defence to a domestic violence charge."

The caste system in India, which leads to untouchability (the practice of ostracizing a group by segregating it from mainstream society), often interacts with gender discrimination, leading to the double discrimination faced by Dalit women. In a 2014 survey, 27% of Indians admitted to practicing untouchability.

Tribal traditions can be harmful to males; for instance, the Satere-Mawe tribe use bullet ants as an initiation rite. Men must wear gloves with hundreds of bullet ants woven in for ten minutes: the ants' stings cause severe pain and paralysis. This experience must be completed twenty times for boys to be considered "warriors".

Portrayal of women in the media

The way women are represented in the media has been criticized as interfering with the aim of achieving gender equality by perpetuating negative gender stereotypes. The exploitation of women in mass media refers to the criticisms levied against the use or objectification of women in the mass media, when such use or portrayal aims at increasing the appeal of media or a product to the detriment of, or without regard to, the interests of the women portrayed, or of women in general. Concerns include the fact that all forms of media have the power to shape the population's perceptions and may portray unrealistic, stereotypical images that falsely imply that women are unimportant or invisible. One criticism of the way women are represented in the media is that it reinforces stereotypical societal views of "what women are for" by portraying women either as submissive housewives or as sex objects. The media also emphasizes traditional roles that can normalize violence against women. According to one study, the way women are often portrayed by the media can lead to: "Women of average or normal appearance feeling inadequate or less beautiful in comparison to the overwhelming use of extraordinarily attractive women"; "Increase in the likelihood and acceptance of sexual violence"; "Unrealistic expectations by men of how women should look or behave"; "Psychological disorders such as body dysmorphic disorder, anorexia, bulimia and so on"; and "The importance of physical appearance is emphasized and reinforced early in most girls' development." Studies have found that nearly half of females ages 6–8 have stated that they want to be slimmer (Striegel-Moore & Franko, 2002).

Social constructs of gender (that is, cultural ideals of socially acceptable masculinity and femininity) often have a negative effect on health. The WHO cites the examples of women not being allowed to travel alone outside the home (even to go to the hospital), and of women being prevented by cultural norms from asking their husbands to use a condom, in cultures which simultaneously encourage male promiscuity, as social norms that harm women's health.
Teenage boys suffering accidents due to social expectations of impressing their peers through risk-taking, and men dying at much higher rates from lung cancer due to smoking in cultures which link smoking to masculinity, are cited by the WHO as examples of gender norms negatively affecting men's health. The WHO has also stated that there is a strong connection between gender socialization and the transmission and inadequate management of HIV/AIDS.

Informing women of their rights

While in many countries the problem lies in the lack of adequate legislation, in others the principal problem is not so much the lack of a legal framework as the fact that most women do not know their legal rights. This is especially the case since many of the laws dealing with women's rights are of recent date. This lack of knowledge enables abusers to lead victims (explicitly or implicitly) to believe that the abuse is within their rights. This may apply to a wide range of abuses, ranging from domestic violence to employment discrimination. The United Nations Development Programme states that, in order to advance gender justice, "Women must know their rights and be able to access legal systems". The 1993 UN Declaration on the Elimination of Violence Against Women states at Art. 4 (d) that [...] "States should also inform women of their rights in seeking redress through such mechanisms".

Enacting protective legislation against violence has little effect if women do not know how to use it: for example, a study of Bedouin women in Israel found that 60% did not know what a restraining order was. It also has little effect if women do not know which acts are illegal: a report by Amnesty International showed that in Hungary, in a public opinion poll of nearly 1,200 people in 2006, a total of 62% did not know that marital rape was illegal (it was outlawed in 1997), and the crime was therefore rarely reported. Ensuring women have a minimum understanding of health issues is also important: lack of access to reliable medical information, and to the available medical procedures to which they are entitled, hurts women's health.

Gender mainstreaming

Gender mainstreaming is the public policy of assessing the different implications for women and men of any planned policy action, including legislation and programmes, in all areas and at all levels, with the aim of achieving gender equality. The concept of gender mainstreaming was first proposed at the 1985 Third World Conference on Women in Nairobi, Kenya, and the idea has since been developed in the United Nations development community. Gender mainstreaming "involves ensuring that gender perspectives and attention to the goal of gender equality are central to all activities". According to the Council of Europe definition: "Gender mainstreaming is the (re)organization, improvement, development and evaluation of policy processes, so that a gender equality perspective is incorporated in all policies at all levels and at all stages, by the actors normally involved in policy-making."

An integrated gender mainstreaming approach is "the attempt to form alliances and common platforms that bring together the power of faith and gender-equality aspirations to advance human rights." For example, "in Azerbaijan, UNFPA conducted a study on gender equality by comparing the text of the Convention on the Elimination of All Forms of Discrimination against Women with some widely recognized Islamic references and resources. The results reflect the parallels between the Convention and many tenets of Islamic scripture and practice.
The study showcased specific issues, including VAW [violence against women], child marriage, respect for the dignity of women, and equality in the economic and political participation of women. The study was later used to produce training materials geared towards sensitizing religious leaders."

See also
- Coloniality of gender
- Special Measures for Gender Equality in the United Nations (UN)
- Equal opportunity
- Gender inequality
- Gender mainstreaming
- Gender neutrality
- Gender role
- Men's rights
- Right to equal protection
- Sex and gender distinction
- Sex ratio
- Women's rights
- Bahá'í Faith and gender equality
- Female economic activity
- Female education
- Gender-based price discrimination
- Gender Parity Index (in education)
- Gender polarization
- Gender sensitization
- Mixed-sex education
- Quaker Testimony of Equality
- Shared Earning/Shared Parenting Marriage (also known as Peer Marriage)
- Women in Islam
- Anti-discrimination law
- Danish Act of Succession referendum, 2009
- Equal Pay Act of 1963 (United States)
- Equality Act 2006 (UK)
- Equality Act 2010 (UK)
- European charter for equality of women and men in local life
- Gender Equality Duty in Scotland
- Gender Equity Education Act (Taiwan)
- Lilly Ledbetter Fair Pay Act (United States, 2009)
- List of gender equality lawsuits
- Paycheck Fairness Act (in the US)
- Title IX of the Education Amendments of 1972 (United States)
- Uniform civil code (India)
- United Nations Security Council Resolution 1325
- Women's Petition to the National Assembly (France, 1789)

Organizations and ministries
- Afghan Ministry of Women Affairs (Afghanistan)
- Center for Development and Population Activities (CEDPA)
- Christians for Biblical Equality
- Committee on Women's Rights and Gender Equality (European Parliament)
- Equal Opportunities Commission (UK)
- Gender Empowerment Measure, a metric used by the United Nations
- Gender-related Development Index, a metric used by the United Nations
- Government Equalities Office (UK)
- International Center for Research on Women
- International Society for Peace
- Ministry of Integration and Gender Equality (Sweden)
- Ministry of Women, Family and Community Development (Malaysia)
- Philippine Commission on Women (Philippines)
- The Girl Effect, an organization to help girls, worldwide, toward ending poverty

Historical anecdotal reports

Other related topics
- Global Gender Gap Report
- International Men's Day
- Potty parity
- Women's Equality Day
- Illustrators for Gender Equality
- Gender apartheid

References
- United Nations. Report of the Economic and Social Council for 1997. A/52/3, 18 September 1997, at 28: "Mainstreaming a gender perspective is the process of assessing the implications for women and men of any planned action, including legislation, policies or programmes, in all areas and at all levels. It is a strategy for making women's as well as men's concerns and experiences an integral dimension of the design, implementation, monitoring and evaluation of policies and programmes in all political, economic and societal spheres so that women and men benefit equally and inequality is not perpetuated. The ultimate goal is to achieve gender equality."
- "Universal Declaration of Human Rights" (PDF). wwda.org. United Nations. December 16, 1948. Retrieved October 31, 2016.
- Riane Eisler (2007). The Real Wealth of Nations: Creating a Caring Economics. p. 72.
- Evans, Frederick William (1859).
Java Networking is a set of classes and APIs provided by the Java platform to facilitate network communication and programming. It allows developers to create networked applications that can communicate over various network protocols, such as TCP/IP and UDP. Let’s explore the network basics and socket overview in the context of Java programming: Network programming refers to the development of applications that communicate over a network. It involves writing code to establish connections, exchange data, and handle network-related tasks. Network programming enables applications to communicate with remote servers, access resources, and share information across different devices. Here are some key aspects of network programming: - Protocols: Network programming often involves working with various protocols such as TCP (Transmission Control Protocol), UDP (User Datagram Protocol), HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol), and SMTP (Simple Mail Transfer Protocol). Understanding and implementing the relevant protocols is essential for effective network programming. - Sockets: Sockets are endpoints for network communication. They provide a programming interface for sending and receiving data over a network. Sockets can be either stream-based (TCP) or datagram-based (UDP). Network programming involves creating, configuring, and managing sockets to establish connections and exchange data. - Client-Server Model: Network applications are typically built using a client-server model. The server application listens for incoming client connections and provides services or resources. Clients connect to the server and interact with it by sending requests and receiving responses. Network programming involves developing both the client and server components to enable communication between them. - IP Addresses and Ports: IP addresses and ports are used to identify networked devices and applications. IP addresses uniquely identify devices on a network, and ports specify specific services or applications running on those devices. Network programming often involves working with IP addresses and ports to establish connections and direct data flow. - Data Serialization: Data serialization is the process of converting data objects into a format suitable for transmission over a network. It involves serializing objects into byte streams on the sender side and deserializing them back into objects on the receiver side. Network programming requires understanding serialization techniques to transfer complex data structures efficiently. - Error Handling and Exception Handling: Network programming involves handling various network-related errors and exceptions. This includes handling network failures, connection timeouts, data transmission errors, and handling exceptions thrown during network operations. Proper error and exception handling ensure the stability and reliability of networked applications. - Security and Encryption: Network programming often involves implementing security measures to protect sensitive data during transmission. This includes using encryption algorithms, secure protocols (such as SSL/TLS), and authentication mechanisms to ensure secure communication between networked applications. Programming languages like Java, Python, C++, and C# provide libraries and APIs specifically designed for network programming. These libraries offer functions and classes that abstract low-level network operations, making it easier to develop networked applications. 
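As an illustration of the data serialization point above, the following is a minimal, self-contained sketch (the Message class and its field are hypothetical examples, not part of the original text) showing how a Java object can be turned into a byte stream for transmission and reconstructed on the other side; in a real application the bytes would be written to and read from a socket stream rather than an in-memory buffer.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical example type; any Serializable class would do.
class Message implements Serializable {
    private static final long serialVersionUID = 1L;
    final String text;
    Message(String text) { this.text = text; }
}

public class SerializationSketch {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        // Sender side: serialize the object into a byte stream.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(new Message("hello"));
        }

        // Receiver side: deserialize the byte stream back into an object.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()))) {
            Message received = (Message) in.readObject();
            System.out.println(received.text); // prints "hello"
        }
    }
}
```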
Network programming is essential for building a wide range of applications, including web applications, chat systems, file transfer programs, IoT (Internet of Things) devices, and distributed systems. Computer network programming involves writing computer programs that allow processes to communicate with each other across a computer network.

Network Basics in Java: Below is a list of some java.net package classes.

|Class||Description|
|URLConnection||The abstract class URLConnection is the superclass of all classes that represent a communications link between the application and a URL.|
|ServerSocket||This class implements server sockets.|
|Socket||This class implements client sockets (also called just "sockets").|
|DatagramPacket||This class represents a datagram packet.|

In Java, sockets are a fundamental part of network programming. They provide a mechanism for communication between applications running on different devices over a network. Here is an overview of sockets in Java:
- Socket Class: The Socket class represents a client-side endpoint of a connection. It allows a Java application to establish a connection to a server running on a specific IP address and port. The Socket class provides methods for connecting to a server, sending and receiving data, and closing the connection.
- ServerSocket Class: The ServerSocket class represents a server-side endpoint that listens for incoming client connections. It binds to a specific IP address and port and waits for client connections. When a client connects, the ServerSocket accepts the connection and returns a new Socket instance representing the client connection.
- TCP Sockets: Java's socket implementation primarily focuses on TCP (Transmission Control Protocol), which provides reliable, connection-oriented communication. TCP sockets are stream-based, allowing continuous streams of data to be sent and received between client and server.
- DatagramSocket Class: In addition to TCP sockets, Java also provides the DatagramSocket class for UDP (User Datagram Protocol) communication. UDP sockets are datagram-based and provide connectionless, unreliable communication. They are suitable for applications where speed and reduced overhead are more important than reliability.
- Input and Output Streams: Once a connection is established using a Socket, its InputStream and OutputStream can be used to read from and write to the socket. The InputStream allows the application to read data sent by the other party, while the OutputStream enables the application to send data to the other party.
- Exception Handling: Socket programming involves handling various exceptions that may occur during network operations. Common exceptions include IOException, UnknownHostException and SocketTimeoutException. Proper exception handling ensures error recovery and graceful handling of network-related issues.
- Multithreading: In many cases, networked applications need to handle multiple client connections simultaneously. Multithreading is often employed to handle concurrent connections efficiently. Each client connection is typically processed in a separate thread, allowing the server to handle multiple clients concurrently.

A socket is one end point of a two-way communication link between two programs running on the network; it is the combination of an IP address and a port number.

Client–server communication follows a simple pattern:
- The two machines must connect
- The server waits for a connection
- The client initiates the connection
- The server responds to the client's request

The server is just like any ordinary program running on a computer, and each computer is equipped with a number of ports.
The server connects to one of these ports; this process is called binding to a port, and the listening endpoint is called a server socket. A minimal version of the Java server code that does this is sketched below; here 2412 is the port number. The server then waits for a client machine, possibly running on a different computer, to connect. The client connects to that port on the server's computer, and its end of the connection is called a client socket. For well-known services the client knows the port in advance (for example, port 80 for HTTP). Once this happens, a connection is established between the client and the server. Every time a client is found, its socket is extracted, and the loop waits again for the next client. Java's socket API provides a powerful and flexible way to implement networked applications. It allows developers to establish connections, send and receive data, and handle communication between client and server applications. With the socket classes and their associated streams, Java programmers can build a wide range of networked applications, including client-server systems, real-time communication applications, and distributed systems.
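The original listing is not reproduced in the text, so the following is a reconstruction sketch rather than the author's code: a server that binds to port 2412, accepts clients in a loop and echoes one line back, plus a matching client. The class names, the echo behaviour and the "localhost" host are illustrative assumptions.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SimpleServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(2412)) {       // bind to port 2412
            while (true) {
                try (Socket client = serverSocket.accept();               // wait for the next client
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String request = in.readLine();                       // read one line from the client
                    out.println("echo: " + request);                      // send a reply, then loop again
                }
            }
        }
    }
}

// A matching client: connect to the server's host and port, send a line, print the reply.
class SimpleClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 2412);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("hello server");
            System.out.println(in.readLine());
        }
    }
}
```

In a production server each accepted socket would typically be handed off to a separate thread, as noted in the multithreading point above, so the accept loop is never blocked by a slow client.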
What is an OBS and why do we use it?
Seismographs measure movement in the Earth's crust. About 90 percent of all natural earthquakes occur underwater, where great pressure and cold make measurements difficult. The ocean-bottom seismograph (OBS) was developed for this task. Scientists use seismograph data to calculate the energy released by earthquakes, like the massive one in December 2004 that caused the Indian Ocean tsunami. By using sensitive seismographs to study small earthquakes, researchers are working to predict large earthquakes or volcanic eruptions. Other scientists use seismographs to peer inside the Earth itself. The waves that earthquakes generate get deformed or slowed down as they pass through different materials inside the Earth. Seismographs equipped with precise clocks record the shape and speed of these waves when they arrive. After an earthquake, data from many widespread seismographs help geologists to calculate the structure of Earth's mantle and crust.

What are the components of an OBS?
The seismometer itself is a small metal cylinder; the rest of the footlocker-sized ocean-bottom seismograph consists of equipment to run the seismometer and record its data (batteries and a data logger), weight to sink it to the sea floor, a remote-controlled acoustic release, and flotation to bring the instrument back to the surface.

Why are there two main types of OBS and what are they?
The ground motion caused by earthquakes can be extremely small (less than a millimeter) or large (several meters). Small motions have high frequencies, so monitoring them requires measuring movement many times per second and produces huge amounts of data. Large motions are much rarer, so instruments need to record data less frequently, to save memory space and battery power for longer deployments. Because of this variability, engineers have designed two basic kinds of seismographs: Short-period OBSs record high-frequency motions (up to hundreds of times per second). They can record small, short-period earthquakes and are also useful for studying the outer tens of kilometers of the seafloor. Technical details for two models: WHOI D2 and Scripps L-CHEAPO. Long-period OBSs record a much broader range of motions, with frequencies of about 10 per second to once or twice a minute. They are used for recording mid-sized earthquakes and seismic activity far from the instrument. Technical details for two models: WHOI long-deployment OBS and Scripps long-deployment OBS.

What platforms are involved?
All ocean-bottom seismographs are designed so they can be deployed and recovered from almost any research vessel. The main required piece of equipment is a winch for lifting the heavy instrument package (60 to 600 kg; 132 to 1320 pounds) into the water. Some ocean-bottom seismographs are linked to scientists in real time through connections to a mooring (such as the Nootka Buoy) or a cabled observatory similar to Martha's Vineyard Coastal Observatory.

Advantages
Very stable clocks make the readings from many far-flung seismographs comparable. (Without reliable time-stamps, data from different machines would be unusable.) Development of these clocks was a crucial advance for seismologists studying the Earth's interior. After recovering an ocean-bottom seismograph, scientists can offload the instrument's data by plugging in a data cable. This feature saves the task of gingerly disassembling the instrument's protective casing while aboard a rolling ship.
The ability to connect a seismograph to a mooring or observatory makes the instrument's data instantly available. This is a huge advantage for geologists scrambling to respond to a major earthquake.

Limitations
Ocean-bottom seismographs are hard to install with pinpoint accuracy (usually they are lowered into place through thousands of meters of water). They can wind up sitting on a cushion of sediment rather than on bedrock. That soft layer can dampen the very tremors the instrument is trying to measure. Short-period seismographs have short battery lives, so large numbers of them must be set out repeatedly during 30-day cruises. These instruments are designed to be small and light to make deployment and recovery easier. Seismographs record so much data that storing it requires writing to a disk drive (up to 27 Gb), which presents another drain on battery power.

Sources
Robert S. Detrick, Jr., Vice President for Marine Facilities and Operations, Woods Hole Oceanographic Institution.
John A. Collins, Research Specialist, Geology and Geophysics, Woods Hole Oceanographic Institution.
Dorman, L. M. Seismology sensors. p. 2737-2744 in J. H. Steele, K. K. Turekian and S. A. Thorpe (eds.), Encyclopedia of Ocean Science, Academic Press, San Diego, CA. (2001)
A spaceplane is a vehicle that operates as an aircraft in Earth's atmosphere, as well as a spacecraft when it is in space. It combines features of an aircraft and a spacecraft: it can be thought of as an aircraft that can endure and maneuver in the vacuum of space, or likewise a spacecraft that can fly like an airplane. Typically, it takes the form of a spacecraft equipped with wings, although lifting bodies have been designed and tested. The propulsion to reach space may be purely rocket based or may use the assistance of air-breathing engines. To date, only pure rocket spaceplanes have succeeded in reaching space, although several have been carried up to an altitude of several tens of thousands of feet by a purely atmospheric aircraft mothership before release. All orbital spaceplanes flown to date have been vertical takeoff, horizontal landing (VTHL) vehicles that used only rocket lift for the ascent phase in reaching space (excluding any mothership first stage) and only atmospheric lift for the reentry and landing phase.
- 1 Description - 2 Flown spaceplanes - 3 Other projects - 4 See also - 5 References - 6 External links
A spaceplane features some differences from rocket launch systems. All aircraft utilize aerodynamic surfaces in order to generate lift. For spaceplanes different shaped wings can be used: delta wings are common, but straight wings, lifting bodies and even rotorcraft have been proposed. Typically the force of lift generated by these surfaces is many times that of the drag that they induce. The ratio of these forces (the lift-to-drag ratio or L/D) varies between different aircraft designs. It can be as high as 60 in high-performance gliders, is usually closer to 7 or less for typical supersonic aircraft configurations, and may be significantly lower for hypersonic aerospace planes. In practice a lift-to-drag ratio of 7 means that a thrust force equal to 1/7 of the weight of the aircraft is sufficient to support it in flight. This low thrust requirement significantly reduces the amount of fuel required to carry the weight of an aerospace plane in comparison to rocket launch systems, which must provide thrust greater than the weight of the vehicle. A partially offsetting difference between these systems is that the aerospace plane would typically experience powered flight for much longer periods of time than a rocket. In addition, winged vehicles need extra dry mass for the wings, and this penalizes vehicles towards the end of the flight. Rockets are also able to use their high thrust at an angle, which gives reasonable lifting efficiency when burning for orbit. However, spaceplanes typically undergo what is called a "zoom maneuver" when transitioning from air-breathing flight to pure rocket propulsion to reach space, in which they change their attitude and climb rate significantly, translating some forward velocity into vertical velocity in order to get above the remaining atmosphere so the rocket engine can operate most efficiently. Because suborbital spaceplanes are designed for trajectories that do not reach orbital speed, they do not need the kinds of thermal protection orbital spacecraft require during the hypersonic phase of atmospheric reentry. The Space Shuttle thermal protection system, for example, protects the orbiter from surface temperatures that could otherwise reach as high as 3,000 °F (1,650 °C), well above the melting point of steel. All spaceplanes to date have used rocket engines with chemical fuels.
Due to the orbital insertion burn necessarily being done in space, orbital spaceplanes require rocket engines for at least that portion of the flight.

Air breathing engines
A difference between rocket based and air-breathing aerospace plane launch systems is that aerospace plane designs typically include minimal oxidizer storage for propulsion. Air-breathing aerospace plane designs include engine inlets so they can use atmospheric oxygen for combustion. Since the mass of the oxidizer is, at takeoff, the single largest mass of most rocket designs (the Space Shuttle's liquid oxygen tank weighs 629,340 kg, more than one of its solid rocket boosters), this provides a huge potential weight savings benefit. However, air breathing engines are usually very much heavier than rocket engines and the empty weight of the oxidiser tank, and since, unlike oxidiser, this extra weight must be carried into space, it greatly offsets the potential gain in overall system performance. Types of air breathing engines proposed for spaceplanes include scramjets, liquid air cycle engines, precooled jet engines, pulse detonation engines and ramjets. Some engine designs combine features of several engine types into a combined cycle. For instance, the rocket-based combined cycle (RBCC) engine uses a rocket engine inside a ramscoop so that at low speed the rocket's thrust is boosted by ejector-augmented thrust. It then transitions to ramjet propulsion at near-supersonic speeds, then to supersonic combustion or scramjet propulsion above Mach 6, then back to pure rocket propulsion above Mach 10.

Harsh flight environment
The flight trajectory required of air-breathing aerospace vehicles to reach orbit is what is known as a 'depressed trajectory', which places the aerospace plane in the high-altitude hypersonic flight regime of the atmosphere. This environment induces high dynamic pressure, high temperature, and high heat flow loads, particularly upon the leading edge surfaces of the aerospace plane. These loads typically require special advanced materials, active cooling, or both for the structures to survive the environment. However, even rocket-powered spaceplanes can face a significant thermal environment if they are burning for orbit, although this is far less severe than for air-breathing spaceplanes. Suborbital space planes designed to briefly reach space do not require significant thermal protection, as they experience peak heating for only a short time during re-entry. Intercontinental suborbital trajectories require much higher speeds and thermal protection more similar to orbital spacecraft reentry.

Center of mass issues
A wingless launch vehicle has lower aerodynamic forces affecting the vehicle, and attitude control can be active, perhaps with some fins to aid stability. For a winged vehicle the centre of lift moves during the atmospheric flight, as does the centre of mass, and the vehicle spends longer in the atmosphere as well. Historically, the X-33 and HOTOL spaceplanes were rear-engined and had relatively heavy engines. This put a heavy mass at the rear of the aircraft, which the wings had to hold up. As the wet mass decreases, the centre of mass tends to move rearward behind the centre of lift, which tends to be around the centre of the wings. This can cause severe instability that is usually solved by extra fins, which add weight and decrease performance. A vertically launched rocket forms the shape of a cylinder stood on end. This structure can be made very light and strong.
A horizontally launched spaceplane approximates a cylinder on its side. This structure experiences greater bending forces, so it must be strengthened. This makes it heavier, requiring advanced materials and design techniques to reduce weight. For example, Burt Rutan of Scaled Composites recently patented a method of gluing the fuel tank directly to the vehicle skin, saving the weight of fasteners while also stiffening both parts.

Single stage to orbit
Future orbital spaceplanes may take off, ascend, descend, and land like conventional aircraft, providing true single stage to orbit (SSTO) capability. Proponents of scramjet technology often cite such a vehicle as a possible application of that type of engine; however, pure rocket and subsonic combustion jet designs have also been proposed and may be easier to design and build. The main problem with SSTO operation is overall weight. All four of the orbital spaceplanes successfully flown to date utilize a VTHL (vertical takeoff, horizontal landing) design. They include the piloted United States Space Shuttle and three unmanned spaceplanes: the early-1980s BOR-4 (a subscale test vehicle for the Spiral spaceplane that was subsequently cancelled), the late-1980s Soviet Buran, and the early-2010s Boeing X-37. These vehicles have used wings to provide aerobraking to return from orbit and to provide lift, allowing them to land on a runway like conventional aircraft. These vehicles are still designed to ascend to orbit vertically under rocket power like conventional expendable launch vehicles. One drawback of spaceplanes is that they have a significantly smaller payload fraction than a ballistic design with the same takeoff weight. This is in part due to the weight of the wings, typically around 9-12% of the atmospheric flight weight of the vehicle. This significantly reduces the payload size, but the reusability is intended to offset this disadvantage. While all spaceplanes have used atmospheric lift for the reentry phase, none to date have succeeded in a design that relies on aerodynamic lift for the ascent phase in reaching space (excluding mothership first stage). Efforts such as the Sänger and X-30/X-33 have all failed to materialize into a vehicle capable of successfully reaching space. The Pegasus winged booster has had many successful flights to deploy orbital payloads, but since its aerodynamic vehicle component operates only as a booster, and does not operate in space as a spacecraft, it is not typically considered to be a spaceplane. On the other hand, OREX, a test vehicle for HOPE-X, was launched into a 450 km low Earth orbit on an H-II rocket in 1994. OREX reentered successfully, but it was only the hemispherical nose section of HOPE-X, that is, not plane-shaped. Other spaceplane designs are suborbital, requiring far less energy for propulsion, and can use the vehicle's wings to provide lift for the ascent to space in addition to the rocket. As of 2010, the only such craft to reach space have been the X-15, SpaceShipOne and ASSET (flown as a subscale precursor to the X-20 Dyna-Soar spaceplane program that was subsequently canceled). None of these craft were capable of entering orbit. The X-15 and SpaceShipOne both began their independent flight only after being lifted to high altitude by a carrier aircraft. On December 7, 2009, Scaled Composites and Virgin Galactic unveiled the SpaceShipTwo space plane, the VSS Enterprise, and its WhiteKnightTwo mothership, "Eve".
SpaceShipTwo is designed to carry two pilots and six passengers on suborbital flights, with flight testing scheduled to be completed in the 2012 time frame. XCOR Aerospace signed a $30 million contract with Yecheon Astro Space Center to build and lease its Lynx Mark II spaceplane, which would be designed to take off from a runway under its own rocket power and to reach the same altitude and speed range as SpaceShipOne and SpaceShipTwo, owing to Lynx being propelled by higher specific impulse fuels. Lynx is designed to carry only a pilot and one passenger, although tickets are expected to cost around half those quoted for Virgin Galactic services. Various types of spaceplanes have been suggested since the early twentieth century. Notable early designs include Friedrich Zander's spaceplane equipped with wings made of combustible alloys that it would burn during its ascent, and Eugen Sänger's Silbervogel bomber design. Also in Nazi Germany and then in the USA, winged versions of the V2 rocket were considered during and after World War II, and when public interest in space exploration was high in the 1950s and '60s, winged rocket designs by Wernher von Braun and Willy Ley served to inspire science fiction artists and filmmakers. The U.S. Air Force invested some effort in a paper study of a variety of spaceplane projects under their Aerospaceplane efforts of the late 1950s, but later ended these when they decided to use a modified version of Sänger's design. The result, the Boeing X-20 Dyna-Soar, was to have been the first orbital spaceplane, but was canceled in the early 1960s in favor of NASA's Project Gemini and the U.S. Air Force's Manned Orbiting Laboratory program. The Rockwell X-30 National Aero-Space Plane (NASP), begun in the 1980s, was an attempt to build a scramjet vehicle capable of operating like an aircraft and achieving orbit like the shuttle. It was canceled due to increasing technical challenges, growing budgets, and the loss of public interest. In 1994 Mitchell Burnside Clapp proposed a single stage to orbit peroxide/kerosene spaceplane called "Black Horse". This was notable in that it was to take off almost empty and undergo mid-air refueling before accelerating to orbit. The Lockheed Martin X-33 was a prototype made as part of an attempt by NASA to build an SSTO hydrogen-fuelled spaceplane, VentureStar, that failed when the hydrogen tank design proved to be unconstructable in the planned way. The March 5, 2006 edition of Aviation Week & Space Technology published a story purporting to "out" a highly classified U.S. military two-stage-to-orbit spaceplane system with the code name Blackstar, also known as SR-3/XOV among other nicknames. The alleged system, using an XB-70-like first-stage mother ship capable of Mach 3, is said to launch an upper-stage "waverider" spaceplane capable of carrying small payloads and crews near to or into orbit or on skip-diving flights, ostensibly for reconnaissance and other missions, achieving surprise that cannot be attained by satellite. There has been considerable controversy over this story and its claims. In 1999 NASA started the Boeing X-37 project, an unmanned, remotely controlled spaceplane. The project was transferred to the U.S. Department of Defense in 2004. It had its first flight as a drop test on 7 April 2006, at Edwards Air Force Base. The spaceplane's first orbital mission, USA-212, was launched on 22 April 2010 using an Atlas V rocket, and the heat shield and hypersonic aerodynamic handling were tested.
A second X-37B test flight was launched on 5 March 2011. Boeing has proposed that a larger variant of the X-37B, the X-37C, could be built to carry up to six passengers to LEO. The spaceplane would also be usable for carrying cargo, with both upmass and downmass (return to Earth) cargo capacity. The ideal size for the proposed derivative "is approximately 165 to 180 percent of the current X-37B." In December 2010, Orbital Sciences made a commercial proposal to NASA to develop the Prometheus, a lifting-body spaceplane vehicle about one-quarter the size of the Space Shuttle, in response to NASA's Commercial Crew Development (CCDev) phase 2 solicitation. The vehicle would be launched on a human-rated (upgraded) Atlas V rocket but would land on a runway. For the same solicitation, Sierra Nevada Corporation proposed phase 2 extensions of its Dream Chaser spaceplane technology, partially developed under the first phase of NASA's CCDev program. Both the Orbital Sciences proposal and the Dream Chaser are lifting body designs. Sierra Nevada will utilize Virgin Galactic to market Dream Chaser commercial services and may use "Virgin's WhiteKnightTwo carrier aircraft as a platform for drop trials of the Dream Chaser atmospheric test vehicle". NASA expects to make approximately $200 million of phase 2 awards by March 2011, for technology development projects that could last up to 14 months.

National Aerospace Plane
President Ronald Reagan described NASP in his 1986 State of the Union address as "...a new Orient Express that could, by the end of the next decade, take off from Dulles Airport and accelerate up to twenty-five times the speed of sound, attaining low earth orbit or flying to Tokyo within two hours..." There were six identifiable technologies which were considered critical to the success of the NASP project. Three of these "enabling" technologies were related to the propulsion system, which would consist of a hydrogen-fueled scramjet. The NASP program became the Hypersonic Systems Technology Program (HySTP) in late 1994. HySTP was designed to transfer the accomplishments made in hypersonic technologies by the National Aero-Space Plane (NASP) program into a technology development program. On January 27, 1995 the Air Force terminated participation in HySTP.

Soviet Union and Russia
The Soviet Union first considered a preliminary design for a small rocket-launched spaceplane, Lapotok, in the early 1960s. The Spiral aerospace system, consisting of a small orbital spaceplane with a rocket as its second stage, was then developed extensively from the 1960s to the 1980s. Although test flights of spaceplane prototypes were carried out in the air (MiG-105) and in space (BOR-4), the program was canceled in 1987, a year before the first Buran flight. A Tupolev Design Bureau project for a military suborbital spaceplane-bomber, the Tu-136/139 Zvezda, was canceled at an early stage. Another project, the Uragan spaceplane, a smaller sibling to Buran to be launched by Proton and Zenit rockets, has never been confirmed by Soviet or Russian authorities as actually having been conducted, although the existence of a similar project, Chelomei's LKS (Kosmolyot) spaceplane, was confirmed. More recently, an orbital spaceplane called the cosmoplane (Russian: космоплан), capable of transporting passengers, has been proposed by Russia's Institute of Applied Mechanics. According to researchers, it could take about 20 minutes to fly from Moscow to Paris, using hydrogen- and oxygen-fueled engines.
Initiated by France, the joint European ESA program for the Hermes manned spaceplane, to be launched on an Ariane rocket, continued for a few years before it was canceled in the early 1990s. Earlier, the French company Dassault Aviation had proposed the Astrobus spaceplane, and it now develops the ARES spaceplane as a prototype for FLPP. Hopper was proposed as a European spaceplane by EADS, which also develops the ARES spaceplane as a prototype for the ESA FLPP/FLTP program and a commercial suborbital spaceplane for space tourism. HOPE was a Japanese experimental spaceplane project designed by a partnership between NASDA and NAL (both now part of JAXA), started in the 1980s. It was positioned for most of its lifetime as one of the main Japanese contributions to the International Space Station, the other being the Japanese Experiment Module. The project was eventually cancelled in 2003, by which point a sub-scale testbed had flown successfully. After the German Sänger-Bredt RaBo and Silbervogel of the 1930s and 1940s, Eugen Sänger worked for a time on various space plane projects, coming up with several designs for Messerschmitt-Bölkow-Blohm such as the MBB Raumtransporter-8. In the 1980s, West Germany funded design work on the MBB Sänger II with the Hypersonic Technology Program. Development continued on the MBB/Deutsche Aerospace Sänger II/HORUS until the late 1980s, when it was canceled. Germany went on to participate in the Ariane rocket, the Columbus space station and Hermes spaceplane of ESA, the Spacelab of ESA-NASA and the Deutschland missions (non-U.S. funded Space Shuttle flights with Spacelab). The Sänger II had predicted cost savings of up to 30 percent over expendable rockets. The Daimler-Chrysler Aerospace RLV was a much later small reusable spaceplane prototype for the ESA FLPP/FLTP program. The Multi-Unit Space Transport And Recovery Device (MUSTARD) was a concept explored by the British Aircraft Corporation (BAC) around 1964-1965 for launching payloads weighing as much as 5,000 lb into orbit. It was never constructed. The British Government also began development of an SSTO spaceplane called HOTOL, but the project was canceled due to technical and financial issues. The lead engineer from the HOTOL project has since set up a private company dedicated to creating a similar plane called Skylon with a different combined cycle rocket/turbine precooled jet engine called SABRE. This vehicle is also intended to be capable of a single stage to orbit launch and, if successful, would be far in advance of anything currently in operation. AVATAR (Sanskrit: अवतार) (from "Aerobic Vehicle for Hypersonic Aerospace TrAnspoRtation") is a single-stage reusable spaceplane capable of horizontal takeoff and landing, being developed by India's Defense Research and Development Organization along with the Indian Space Research Organization and other research institutions; it could be used for cheaper military and civilian satellite launches.
- Ansari X Prize - List of manned spacecraft - List of private spaceflight companies#Crew and cargo transport vehicles
- ^ "ORBITER THERMAL PROTECTION SYSTEM". NASA KSC. 1989. http://www-pao.ksc.nasa.gov/kscpao/nasafact/tps.htm. - ^ Space Shuttle external tank#Technical data - ^ "OREX". Space Transportation System Research and Development Center, JAXA. http://www.rocket.jaxa.jp/fstrc/0c01.html. Retrieved 2011-05-15. - ^ Andy Pasztor (December 17, 2009). "XCOR Aerospace Gets First Lease Customer for Its Space Plane". The Wall Street Journal. http://online.wsj.com/article/SB10001424052748704238104574602492468293788.html. - ^ "Hyflex".
astronautix.com. http://www.astronautix.com/craft/hyflex.htm. Retrieved 2011-05-15. - ^ "HYFLEX". Space Transportation System Research and Development Center, JAXA. http://www.rocket.jaxa.jp/fstrc/0c02.html. Retrieved 2011-05-15. - ^ Black Horse. astronautix.com - ^ David, Leonard (2011-10-07). "Secretive US X-37B Space Plane Could Evolve to Carry Astronauts". space.com. http://www.space.com/13230-secretive-37b-space-plane-future-astronauts.html. Retrieved 2011-10-13. - ^ "Orbital Proposes Spaceplan for Astronauts". Wall Street Journal, December 14, 2010. Accessed: December 15, 2010. - ^ a b Orbital Aims For Station With Lifting Body, Aviation Week, 2010-12-17, accessed 2010-12-20. "will use Virgin to market its services. But Sierra is also in discussions about using Virgin's WhiteKnightTwo carrier aircraft as a platform for drop trials of the Dream Chaser atmospheric test vehicle" - ^ Companies submit plans for new NASA spacecraft, Daily Record, 2010-12-17, accessed 2010-12-20. - ^ Virgin joins forces with two companies on CCDev, NewSpace Journal, 2010-12-16, accessed 2010-12-18. - ^ "NASA Seeks More Proposals On Commercial Crew Development". press release 10-277. NASA. October 25, 2010. http://www.nasa.gov/home/hqnews/2010/oct/HQ_10-277_CCDev.html. - ^ a b c "X-30 National Aerospace Plane (NASP)". Federation of American Scientists. http://www.fas.org/irp/mystery/nasp.htm. Retrieved 2010-04-30. - ^ Russia Develops New Aircraft – Cosmoplane - ^ RusUsa.com Космоплан – самолет будущего - ^ http://www.astronautix.com/lvs/saengeri.htm - ^ http://www.astronautix.com/lvs/saegerii.htm - ^ http://www.fas.org/spp/guide/germany/piloted/index.html - ^ David Darling (2010). "MUSTARD INFO". http://www.daviddarling.info/encyclopedia/M/MUSTARD.html. Retrieved 29 September 2010. - ^ "HOTOL History". Reaction Engines Limited. 2010. http://www.reactionengines.co.uk/bkgrnd.html. Retrieved 29 September 2010. - ^ "Skylon FAQ". Reaction Engines Limited. 2010. http://www.reactionengines.co.uk/faq.html#q6. Retrieved 29 September 2010. - Encyclopedia Astronautica article on Uragan / Zenit - Russianspacweb: Russian Reusable Spacecraft - Popular Science article: Space Shuttle proposals written by Wernher von Braun - July 1970 - Popular Science article: VentureStar, X-34, MAKS, Burlak and other - October 1996 - Popular Science article: Space Access' Space Plane - January 1998 - Popular Science article: Space planes - May 1999 - Popular Science article: Space plane replacement of Space Shuttle and info on past designs including NASP and Clipper - May 2003
Color vision, a feature of visual perception, is the ability to perceive differences between light composed of different frequencies independently of light intensity. Color perception is a part of the larger visual system and is mediated by a complex process between neurons that begins with differential stimulation of different types of photoreceptors by light entering the eye. Those photoreceptors then emit outputs that are propagated through many layers of neurons and then ultimately to the brain. Color vision is found in many animals and is mediated by similar underlying mechanisms with common types of biological molecules and a complex history of evolution in different animal taxa. In primates, color vision may have evolved under selective pressure for a variety of visual tasks including foraging for nutritious young leaves, ripe fruit, and flowers, as well as detecting predator camouflage and emotional states in other primates. Isaac Newton discovered that white light, after being split into its component colors when passed through a dispersive prism, could be recombined to make white light by passing the colors through a different prism. The visible light spectrum ranges from about 380 to 740 nanometers. Spectral colors (colors that are produced by a narrow band of wavelengths) such as red, orange, yellow, green, cyan, blue, and violet can be found in this range. These spectral colors do not refer to a single wavelength, but rather to a set of wavelengths: red, 625–740 nm; orange, 590–625 nm; yellow, 565–590 nm; green, 500–565 nm; cyan, 485–500 nm; blue, 450–485 nm; violet, 380–450 nm. Wavelengths longer or shorter than this range are called infrared or ultraviolet, respectively. Humans cannot generally see these wavelengths, but other animals may. Sufficient differences in wavelength cause a difference in the perceived hue; the just-noticeable difference in wavelength varies from about 1 nm in the blue-green and yellow wavelengths to 10 nm and more in the longer red and shorter blue wavelengths. Although the human eye can distinguish up to a few hundred hues, when those pure spectral colors are mixed together or diluted with white light, the number of distinguishable chromaticities can be much higher. In very low light levels, vision is scotopic: light is detected by rod cells of the retina. Rods are maximally sensitive to wavelengths near 500 nm and play little, if any, role in color vision. In brighter light, such as daylight, vision is photopic: light is detected by cone cells which are responsible for color vision. Cones are sensitive to a range of wavelengths, but are most sensitive to wavelengths near 555 nm. Between these regions, mesopic vision comes into play and both rods and cones provide signals to the retinal ganglion cells. The shift in color perception from dim light to daylight gives rise to differences known as the Purkinje effect. The perception of "white" is formed by the entire spectrum of visible light, or by mixing colors of just a few wavelengths in animals with few types of color receptors. In humans, white light can be perceived by combining wavelengths such as red, green, and blue, or just a pair of complementary colors such as blue and yellow. There are a variety of colors in addition to spectral colors and their hues. These include grayscale colors, shades of colors obtained by mixing grayscale colors with spectral colors, violet-red colors, impossible colors, and metallic colors. Grayscale colors include white, gray, and black.
Rods contain rhodopsin, which reacts to light intensity, providing grayscale coloring. Shades include colors such as pink or brown. Pink is obtained from mixing red and white. Brown may be obtained from mixing orange with gray or black. Navy is obtained from mixing blue and black. Violet-red colors include hues and shades of magenta. The light spectrum is a line with violet at one end and red at the other, and yet we see hues of purple that connect those two colors. Impossible colors are a combination of cone responses that cannot be naturally produced. For example, medium cones cannot be activated completely on their own; if they were, we would see a 'hyper-green' color. Color vision is categorized foremost according to the dimensionality of the color gamut, which is defined by the number of primaries required to represent the color vision. This is generally equal to the number of photopsins expressed: a correlation that holds for vertebrates but not invertebrates. The common vertebrate ancestor possessed four photopsins (expressed in cones) plus rhodopsin (expressed in rods), so was tetrachromatic. However, many vertebrate lineages have lost one or more photopsin genes, leading to lower-dimension color vision. The dimensions of color vision range from 1-dimensional and up:
- Monochromacy - 1D color vision - lack of any color perception
- Dichromacy - 2D color vision - dimensionality of most mammals and a quarter of color blind humans
- Trichromacy - 3D color vision - dimensionality of most humans
- Tetrachromacy - 4D color vision - dimensionality of most birds, reptiles and fish
- Pentachromacy and higher - 5D+ color vision - rare in vertebrates

Physiology of color perception
Perception of color begins with specialized retinal cells known as cone cells. Cone cells contain different forms of opsin – a pigment protein – that have different spectral sensitivities. Humans contain three types, resulting in trichromatic color vision. Each individual cone contains pigments composed of opsin apoprotein covalently linked to a light-absorbing prosthetic group: either 11-cis-hydroretinal or, more rarely, 11-cis-dehydroretinal. The cones are conventionally labeled according to the ordering of the wavelengths of the peaks of their spectral sensitivities: short (S), medium (M), and long (L) cone types. These three types do not correspond well to particular colors as we know them. Rather, the perception of color is achieved by a complex process that starts with the differential output of these cells in the retina and which is finalized in the visual cortex and associative areas of the brain. For example, while the L cones have been referred to simply as red receptors, microspectrophotometry has shown that their peak sensitivity is in the greenish-yellow region of the spectrum. Similarly, the S cones and M cones do not directly correspond to blue and green, although they are often described as such. The RGB color model, therefore, is a convenient means for representing color but is not directly based on the types of cones in the human eye. The peak response of human cone cells varies, even among individuals with so-called normal color vision; in some non-human species this polymorphic variation is even greater, and it may well be adaptive. Two complementary theories of color vision are the trichromatic theory and the opponent process theory.
The trichromatic theory, or Young–Helmholtz theory, proposed in the 19th century by Thomas Young and Hermann von Helmholtz, posits three types of cones preferentially sensitive to blue, green, and red, respectively. Others have suggested that the trichromatic theory is not specifically a theory of color vision but a theory of receptors for all vision, including color but not specific or limited to it. Equally, it has been suggested that the relationship between the phenomenal opponency described by Hering and the physiological opponent processes is not straightforward (see below), making physiological opponency a mechanism that is relevant to the whole of vision, not just to color vision alone.

Ewald Hering proposed the opponent process theory in 1872. It states that the visual system interprets color in an antagonistic way: red vs. green, blue vs. yellow, black vs. white. Both theories are generally accepted as valid, describing different stages in visual physiology. Green–magenta and blue–yellow are scales with mutually exclusive boundaries: in the same way that there cannot exist a "slightly negative" positive number, a single eye cannot perceive a bluish-yellow or a reddish-green.

Although these two theories are both currently widely accepted, past and more recent work has led to criticism of the opponent process theory, stemming from a number of what are presented as discrepancies in the standard opponent process theory. For example, the phenomenon of an after-image of complementary color can be induced by fatiguing the cells responsible for color perception, by staring at a vibrant color for a length of time and then looking at a white surface. This phenomenon of complementary colors demonstrates cyan, rather than green, to be the complement of red, and magenta, rather than red, to be the complement of green; it also demonstrates, as a consequence, that the reddish-green color proposed to be impossible by opponent process theory is, in fact, the color yellow. Although this phenomenon is more readily explained by the trichromatic theory, explanations for the discrepancy may include alterations to the opponent process theory, such as redefining the opponent colors as red vs. cyan, to reflect this effect. Despite such criticisms, both theories remain in use.

A recent demonstration, using the Color Mondrian, has shown that, just as the color of a surface that is part of a complex 'natural' scene is independent of the wavelength-energy composition of the light reflected from it alone, depending as well on the composition of the light reflected from its surrounds, so the after-image produced by looking at a given part of a complex scene is also independent of the wavelength-energy composition of the light reflected from it alone. Thus, while the color of the after-image produced by looking at a green surface that is reflecting more "green" (middle-wave) than "red" (long-wave) light is magenta, so is the after-image of the same surface when it reflects more "red" than "green" light (when it is still perceived as green). This would seem to rule out an explanation of color opponency based on retinal cone adaptation.

Cone cells in the human eye

A range of wavelengths of light stimulates each of these receptor types to varying degrees. The brain combines the information from each type of receptor to give rise to different perceptions of different wavelengths of light.
| Cone type | Name | Range | Peak wavelength |
| S | β | 400–500 nm | 420–440 nm |
| M | γ | 450–630 nm | 534–555 nm |
| L | ρ | 500–700 nm | 564–580 nm |

Cones and rods are not evenly distributed in the human eye. Cones have a high density at the fovea and a low density in the rest of the retina. Thus color information is mostly taken in at the fovea. Humans have poor color perception in their peripheral vision, and much of the color we see in our periphery may be filled in by what our brains expect to be there on the basis of context and memories. However, our accuracy of color perception in the periphery increases with the size of the stimulus.

The opsins (photopigments) present in the L and M cones are encoded on the X chromosome; defective encoding of these leads to the two most common forms of color blindness. The OPN1LW gene, which encodes the opsin present in the L cones, is highly polymorphic; one study found 85 variants in a sample of 236 men. A small percentage of women may have an extra type of color receptor because they have different alleles for the gene for the L opsin on each X chromosome. X chromosome inactivation means that while only one opsin is expressed in each cone cell, both types may occur overall, and some women may therefore show a degree of tetrachromatic color vision. Variations in OPN1MW, which encodes the opsin expressed in M cones, appear to be rare, and the observed variants have no effect on spectral sensitivity.

Color in the primate brain

Color processing begins at a very early level in the visual system (even within the retina) through initial color opponent mechanisms. Both Helmholtz's trichromatic theory and Hering's opponent-process theory are therefore correct, but trichromacy arises at the level of the receptors, and opponent processes arise at the level of retinal ganglion cells and beyond. In Hering's theory, opponent mechanisms refer to the opposing color effects of red–green, blue–yellow, and light–dark. However, in the visual system, it is the activity of the different receptor types that are opposed. Some midget retinal ganglion cells oppose L and M cone activity, which corresponds loosely to red–green opponency but actually runs along an axis from blue-green to magenta. Small bistratified retinal ganglion cells oppose input from the S cones to input from the L and M cones. This is often thought to correspond to blue–yellow opponency but actually runs along a color axis from yellow-green to violet.

Visual information is then sent to the brain from retinal ganglion cells via the optic nerve to the optic chiasma: a point where the two optic nerves meet and information from the temporal (contralateral) visual field crosses to the other side of the brain. After the optic chiasma, the visual tracts are referred to as the optic tracts, which enter the thalamus to synapse at the lateral geniculate nucleus (LGN). The lateral geniculate nucleus is divided into laminae (zones), of which there are three types: the M-laminae, consisting primarily of M-cells; the P-laminae, consisting primarily of P-cells; and the koniocellular laminae. M- and P-cells receive relatively balanced input from both L- and M-cones throughout most of the retina, although this seems not to be the case at the fovea, with midget cells synapsing in the P-laminae. The koniocellular laminae receive axons from the small bistratified ganglion cells. After synapsing at the LGN, the visual tract continues on back to the primary visual cortex (V1) located at the back of the brain within the occipital lobe.
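Before following the pathway into the cortex, the numbers in the table and the opponent pairings just described can be made concrete with a small toy model. The Gaussian sensitivity curves, their widths, and the exact opponent formulas below are illustrative assumptions, not measured physiology.

```python
import numpy as np

# Peak sensitivities (nm) taken from the table above; the Gaussian shapes and
# bandwidth are purely illustrative assumptions, not measured cone spectra.
CONE_PEAKS = {"S": 430.0, "M": 545.0, "L": 570.0}
BANDWIDTH_NM = 50.0  # assumed width of the illustrative sensitivity curves

def cone_responses(wavelength_nm: float) -> dict:
    """Toy LMS responses to a monochromatic light of unit intensity."""
    return {
        name: float(np.exp(-0.5 * ((wavelength_nm - peak) / BANDWIDTH_NM) ** 2))
        for name, peak in CONE_PEAKS.items()
    }

def opponent_channels(lms: dict) -> dict:
    """Simplified opponent signals loosely following the ganglion-cell opponency
    described above: L vs. M ('red-green'), S vs. L+M ('blue-yellow'), and a
    luminance-like sum. A schematic illustration, not a retinal model."""
    return {
        "L_minus_M": lms["L"] - lms["M"],
        "S_minus_LM": lms["S"] - 0.5 * (lms["L"] + lms["M"]),
        "luminance": lms["L"] + lms["M"],
    }

lms = cone_responses(600.0)  # an orange-ish monochromatic light
print(lms, opponent_channels(lms))
```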
Within V1 there is a distinct band (striation). This is also referred to as "striate cortex", with other cortical visual regions referred to collectively as "extrastriate cortex". It is at this stage that color processing becomes much more complicated. In V1 the simple three-color segregation begins to break down. Many cells in V1 respond to some parts of the spectrum better than others, but this "color tuning" is often different depending on the adaptation state of the visual system. A given cell that might respond best to long-wavelength light if the light is relatively bright might then become responsive to all wavelengths if the stimulus is relatively dim. Because the color tuning of these cells is not stable, some believe that a different, relatively small, population of neurons in V1 is responsible for color vision. These specialized "color cells" often have receptive fields that can compute local cone ratios. Such "double-opponent" cells were initially described in the goldfish retina by Nigel Daw; their existence in primates was suggested by David H. Hubel and Torsten Wiesel, first demonstrated by C. R. Michael and subsequently confirmed by Bevil Conway. As Margaret Livingstone and David Hubel showed, double-opponent cells are clustered within localized regions of V1 called blobs, and are thought to come in two flavors, red–green and blue–yellow. Red–green cells compare the relative amounts of red–green in one part of a scene with the amount of red–green in an adjacent part of the scene, responding best to local color contrast (red next to green). Modeling studies have shown that double-opponent cells are ideal candidates for the neural machinery of color constancy explained by Edwin H. Land in his retinex theory.

From the V1 blobs, color information is sent to cells in the second visual area, V2. The cells in V2 that are most strongly color tuned are clustered in the "thin stripes" that, like the blobs in V1, stain for the enzyme cytochrome oxidase (separating the thin stripes are interstripes and thick stripes, which seem to be concerned with other visual information like motion and high-resolution form). Neurons in V2 then synapse onto cells in the extended V4. This area includes not only V4, but two other areas in the posterior inferior temporal cortex, anterior to area V3, the dorsal posterior inferior temporal cortex, and posterior TEO. Area V4 was initially suggested by Semir Zeki to be exclusively dedicated to color, and he later showed that V4 can be subdivided into subregions with very high concentrations of color cells separated from each other by zones with lower concentrations of such cells, though even the latter cells respond better to some wavelengths than to others, a finding confirmed by subsequent studies. The presence in V4 of orientation-selective cells led to the view that V4 is involved in processing both color and form associated with color, but it is worth noting that the orientation-selective cells within V4 are more broadly tuned than their counterparts in V1, V2 and V3.

Color processing in the extended V4 occurs in millimeter-sized color modules called globs. This is the part of the brain in which color is first processed into the full range of hues found in color space. Anatomical studies have shown that neurons in extended V4 provide input to the inferior temporal lobe. "IT" cortex is thought to integrate color information with shape and form, although it has been difficult to define the appropriate criteria for this claim.
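A minimal sketch of the "local cone ratio" idea behind double-opponent cells might look like the following; the center-surround arithmetic and the input arrays are hypothetical and intended only to show how a local red-green contrast could be scored.

```python
import numpy as np

def double_opponent_response(l_img: np.ndarray, m_img: np.ndarray,
                             center: tuple, radius: int) -> float:
    """Toy 'red-green' double-opponent response: the (L - M) signal in a central
    patch minus the (L - M) signal in the surrounding annulus, so the response
    is largest for local color contrast (e.g. red next to green).
    l_img, m_img: 2D arrays of hypothetical L- and M-cone activations."""
    y, x = center
    diff = l_img - m_img
    ys, xs = np.ogrid[:diff.shape[0], :diff.shape[1]]
    dist = np.sqrt((ys - y) ** 2 + (xs - x) ** 2)
    center_mask = dist <= radius
    surround_mask = (dist > radius) & (dist <= 2 * radius)
    return float(diff[center_mask].mean() - diff[surround_mask].mean())

# A reddish patch (high L, low M) on a greenish background (low L, high M):
L = np.full((20, 20), 0.2)
M = np.full((20, 20), 0.8)
L[8:12, 8:12], M[8:12, 8:12] = 0.9, 0.1
print(double_opponent_response(L, M, center=(10, 10), radius=2))  # strongly positive
```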
Despite this murkiness about the role of IT cortex, it has been useful to characterize this pathway (V1 > V2 > V4 > IT) as the ventral stream or the "what pathway", distinguished from the dorsal stream ("where pathway") that is thought to analyze motion, among other features.

Subjectivity of color perception

Color is a feature of visual perception by an observer. There is a complex relationship between the wavelengths of light in the visual spectrum and human experiences of color. Although most people are assumed to have the same mapping, the philosopher John Locke recognized that alternatives are possible, and described one such hypothetical case with the "inverted spectrum" thought experiment. For example, someone with an inverted spectrum might experience green while seeing 'red' (700 nm) light, and experience red while seeing 'green' (530 nm) light. This inversion has never been demonstrated in experiment, though.

Synesthesia (or ideasthesia) provides some atypical but illuminating examples of subjective color experience triggered by input that is not even light, such as sounds or shapes. The possibility of a clean dissociation of color experience from properties of the world reveals that color is a subjective psychological phenomenon.

The Himba people have been found to categorize colors differently from most Westerners and are able to easily distinguish close shades of green that are barely discernible for most people. The Himba have created a very different color scheme which divides the spectrum into dark shades (zuzu in Himba), very light (vapa), vivid blue and green (buru) and dry colors, as an adaptation to their specific way of life.

The perception of color depends heavily on the context in which the perceived object is presented. Psychophysical experiments have shown that color is perceived before the orientation of lines and directional motion, by as much as 40 ms and 80 ms respectively, thus leading to a perceptual asynchrony that is demonstrable with brief presentation times.

In color vision, chromatic adaptation refers to color constancy: the ability of the visual system to preserve the appearance of an object under a wide range of light sources. For example, a white page under blue, pink, or purple light will reflect mostly blue, pink, or purple light to the eye, respectively; the brain, however, compensates for the effect of lighting (based on the color shift of surrounding objects) and is more likely to interpret the page as white under all three conditions, a phenomenon known as color constancy.

In color science, chromatic adaptation is the estimation of the representation of an object under a light source different from the one in which it was recorded. A common application is to find a chromatic adaptation transform (CAT) that will make the recording of a neutral object appear neutral (color balance), while keeping other colors also looking realistic. For example, chromatic adaptation transforms are used when converting images between ICC profiles with different white points. Adobe Photoshop, for example, uses the Bradford CAT.

Color vision in nonhumans

Many species can see light with frequencies outside the human "visible spectrum". Bees and many other insects can detect ultraviolet light, which helps them to find nectar in flowers. Plant species that depend on insect pollination may owe reproductive success to ultraviolet "colors" and patterns rather than how colorful they appear to humans.
Birds, too, can see into the ultraviolet (300–400 nm), and some have sex-dependent markings on their plumage that are visible only in the ultraviolet range. Many animals that can see into the ultraviolet range, however, cannot see red light or any other reddish wavelengths. For example, bees' visible spectrum ends at about 590 nm, just before the orange wavelengths start. Birds, however, can see some red wavelengths, although not as far into the light spectrum as humans. It is a myth that the common goldfish is the only animal that can see both infrared and ultraviolet light; their color vision extends into the ultraviolet but not the infrared.

The basis for this variation is the number of cone types, which differs between species. Mammals in general have color vision of a limited type, and usually have red-green color blindness, with only two types of cones. Humans, some primates, and some marsupials see an extended range of colors, but only by comparison with other mammals. Most non-mammalian vertebrate species distinguish different colors at least as well as humans, and many species of birds, fish, reptiles, and amphibians, and some invertebrates, have more than three cone types and probably superior color vision to humans.

In most Catarrhini (Old World monkeys and apes, primates closely related to humans), there are three types of color receptors (known as cone cells), resulting in trichromatic color vision. These primates, like humans, are known as trichromats. Many other primates (including New World monkeys) and other mammals are dichromats, which is the general color vision state for mammals that are active during the day (i.e., felines, canines, ungulates). Nocturnal mammals may have little or no color vision. Trichromat non-primate mammals are rare.

Many invertebrates have color vision. Honeybees and bumblebees have trichromatic color vision which is insensitive to red but sensitive to ultraviolet. Osmia rufa, for example, possesses a trichromatic color system, which it uses in foraging for pollen from flowers. In view of the importance of color vision to bees, one might expect these receptor sensitivities to reflect their specific visual ecology, for example the types of flowers that they visit. However, the main groups of hymenopteran insects excluding ants (i.e., bees, wasps and sawflies) mostly have three types of photoreceptor, with spectral sensitivities similar to the honeybee's. Papilio butterflies possess six types of photoreceptors and may have pentachromatic vision. The most complex color vision system in the animal kingdom has been found in stomatopods (such as the mantis shrimp), which have between 12 and 16 spectral receptor types thought to work as multiple dichromatic units.

Vertebrate animals such as tropical fish and birds sometimes have more complex color vision systems than humans; thus the many subtle colors they exhibit generally serve as direct signals for other fish or birds, and not to signal mammals. In bird vision, tetrachromacy is achieved through up to four cone types, depending on species. Each single cone contains one of the four main types of vertebrate cone photopigment (LWS/MWS, RH2, SWS2 and SWS1) and has a colored oil droplet in its inner segment. Brightly colored oil droplets inside the cones shift or narrow the spectral sensitivity of the cell. Pigeons may be pentachromats.

Reptiles and amphibians also have four cone types (occasionally five), and probably see at least the same number of colors that humans do, or perhaps more.
In addition, some nocturnal geckos and frogs have the capability of seeing color in dim light. At least some color-guided behaviors in amphibians have also been shown to be wholly innate, developing even in visually deprived animals.

In the evolution of mammals, segments of color vision were lost, then for a few species of primates, regained by gene duplication. Eutherian mammals other than primates (for example, dogs and mammalian farm animals) generally have less-effective two-receptor (dichromatic) color perception systems, which distinguish blue, green, and yellow, but cannot distinguish oranges and reds. There is some evidence that a few mammals, such as cats, have redeveloped the ability to distinguish longer-wavelength colors, in at least a limited way, via one-amino-acid mutations in opsin genes. The adaptation to see reds is particularly important for primate mammals, since it leads to the identification of fruits, and also newly sprouting reddish leaves, which are particularly nutritious.

However, even among primates, full color vision differs between New World and Old World monkeys. Old World primates, including monkeys and all apes, have vision similar to humans. New World monkeys may or may not have color sensitivity at this level: in most species, males are dichromats and about 60% of females are trichromats, but the owl monkeys are cone monochromats, and both sexes of howler monkeys are trichromats. Visual sensitivity differences between males and females in a single species are due to the gene for the yellow-green sensitive opsin protein (which confers the ability to differentiate red from green) residing on the X sex chromosome. Several marsupials, such as the fat-tailed dunnart (Sminthopsis crassicaudata), have trichromatic color vision. Marine mammals, adapted for low-light vision, have only a single cone type and are thus monochromats.

Color perception mechanisms are highly dependent on evolutionary factors, of which the most prominent is thought to be satisfactory recognition of food sources. In herbivorous primates, color perception is essential for finding proper (immature) leaves. In hummingbirds, particular flower types are often recognized by color as well. On the other hand, nocturnal mammals have less-developed color vision, since adequate light is needed for cones to function properly. There is evidence that ultraviolet light plays a part in color perception in many branches of the animal kingdom, especially insects. In general, the optical spectrum encompasses the most common electronic transitions in matter and is therefore the most useful for collecting information about the environment. The evolution of trichromatic color vision in primates occurred as the ancestors of modern monkeys, apes, and humans switched to diurnal (daytime) activity and began consuming fruits and leaves from flowering plants. Color vision, with UV discrimination, is also present in a number of arthropods, the only terrestrial animals besides the vertebrates to possess this trait.

Some animals can distinguish colors in the ultraviolet spectrum. The UV spectrum falls outside the human visible range, except for some cataract surgery patients, whose lenses no longer filter out UV. Birds, turtles, lizards, many fish and some rodents have UV receptors in their retinas. These animals can see the UV patterns found on flowers and other wildlife that are otherwise invisible to the human eye. Ultraviolet vision is an especially important adaptation in birds.
It allows birds to spot small prey from a distance, navigate, avoid predators, and forage while flying at high speeds. Birds also utilize their broad-spectrum vision to recognize other birds, and in sexual selection.

Mathematics of color perception

A "physical color" is a combination of pure spectral colors (in the visible range). In principle there exist infinitely many distinct spectral colors, and so the set of all physical colors may be thought of as an infinite-dimensional vector space (a Hilbert space). This space is typically notated H_color. More technically, the space of physical colors may be considered to be the topological cone over the simplex whose vertices are the spectral colors, with white at the centroid of the simplex, black at the apex of the cone, and the monochromatic color associated with any given vertex somewhere along the line from that vertex to the apex depending on its brightness.

An element C of H_color is a function from the range of visible wavelengths, considered as an interval of real numbers [W_min, W_max], to the real numbers, assigning to each wavelength w in [W_min, W_max] its intensity C(w).

A humanly perceived color may be modeled as three numbers: the extents to which each of the 3 types of cones is stimulated. Thus a humanly perceived color may be thought of as a point in 3-dimensional Euclidean space. We call this space R^3_color. Since each wavelength w stimulates each of the 3 types of cone cells to a known extent, these extents may be represented by 3 functions s(w), m(w), l(w) corresponding to the response of the S, M, and L cone cells, respectively. Finally, since a beam of light can be composed of many different wavelengths, to determine the extent to which a physical color C in H_color stimulates each cone cell, we must calculate the integral (with respect to w), over the interval [W_min, W_max], of C(w)·s(w), of C(w)·m(w), and of C(w)·l(w). The triple of resulting numbers associates with each physical color C (which is an element in H_color) a particular perceived color (which is a single point in R^3_color). This association is easily seen to be linear. It may also easily be seen that many different elements in the "physical" space H_color can all result in the same single perceived color in R^3_color, so a perceived color is not unique to one physical color.

Thus human color perception is determined by a specific, non-unique linear mapping from the infinite-dimensional Hilbert space H_color to the 3-dimensional Euclidean space R^3_color. Technically, the image of the (mathematical) cone over the simplex whose vertices are the spectral colors, by this linear mapping, is also a (mathematical) cone in R^3_color. Moving directly away from the vertex of this cone represents maintaining the same chromaticity while increasing its intensity. Taking a cross-section of this cone yields a 2D chromaticity space. Both the 3D cone and its projection or cross-section are convex sets; that is, any mixture of spectral colors is also a color.

In practice, it would be quite difficult to physiologically measure an individual's three cone responses to various physical color stimuli. Instead, a psychophysical approach is taken. Three specific benchmark test lights are typically used; let us call them S, M, and L. To calibrate human perceptual space, scientists allowed human subjects to try to match any physical color by turning dials to create specific combinations of intensities (I_S, I_M, I_L) for the S, M, and L lights, respectively, until a match was found.
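As a numerical sketch of the mapping from H_color to R^3_color described above, the following Python code approximates the three integrals with simple Riemann sums; the Gaussian stand-ins for s(w), m(w), l(w) are assumptions made purely for illustration, not real cone sensitivities.

```python
import numpy as np

# Visible range and illustrative cone sensitivity curves. The Gaussian shapes,
# peaks, and widths are stand-ins for the real s(w), m(w), l(w) functions.
W_MIN, W_MAX = 380.0, 740.0
wavelengths = np.linspace(W_MIN, W_MAX, 361)
dw = wavelengths[1] - wavelengths[0]

def gaussian(w, peak, width=50.0):
    return np.exp(-0.5 * ((w - peak) / width) ** 2)

s_w = gaussian(wavelengths, 430.0)   # toy S-cone sensitivity
m_w = gaussian(wavelengths, 545.0)   # toy M-cone sensitivity
l_w = gaussian(wavelengths, 570.0)   # toy L-cone sensitivity

def perceived_point(spectrum):
    """Map a physical color C(w), sampled on `wavelengths`, to the 3D point
    (integral C*s, integral C*m, integral C*l) using a Riemann sum."""
    return np.array([
        np.sum(spectrum * s_w) * dw,
        np.sum(spectrum * m_w) * dw,
        np.sum(spectrum * l_w) * dw,
    ])

# Two physically different spectra: broadband light vs. two narrow lines.
flat = np.ones_like(wavelengths)
two_lines = gaussian(wavelengths, 450.0, 5.0) + gaussian(wavelengths, 580.0, 5.0)

p_flat, p_lines = perceived_point(flat), perceived_point(two_lines)

# Dividing each point by the sum of its components projects it onto a
# chromaticity-like cross-section, mirroring the 2D space described above.
print(p_flat / p_flat.sum())
print(p_lines / p_lines.sum())
```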
The matching needed only to be done for physical colors that are spectral, since a linear combination of spectral colors will be matched by the same linear combination of their (I_S, I_M, I_L) matches. Note that in practice, often at least one of S, M, L would have to be added with some intensity to the physical test color, and that combination matched by a linear combination of the remaining two lights. Across different individuals (without color blindness), the matchings turned out to be nearly identical.

By considering all the resulting combinations of intensities (I_S, I_M, I_L) as a subset of 3-space, a model for human perceptual color space is formed. (Note that when one of S, M, L had to be added to the test color, its intensity was counted as negative.) Again, this turns out to be a (mathematical) cone: not a quadric, but rather the set of all rays through the origin in 3-space passing through a certain convex set. Again, this cone has the property that moving directly away from the origin corresponds to increasing the intensity of the S, M, L lights proportionately. Again, a cross-section of this cone is a planar shape that is (by definition) the space of "chromaticities" (informally: distinct colors); one particular such cross-section, corresponding to constant X+Y+Z of the CIE 1931 color space, gives the CIE chromaticity diagram.

This system implies that for any hue or non-spectral color not on the boundary of the chromaticity diagram, there are infinitely many distinct physical spectra that are all perceived as that hue or color. So, in general, there is no such thing as the combination of spectral colors that we perceive as (say) a specific version of tan; instead, there are infinitely many possibilities that produce that exact color. The boundary colors that are pure spectral colors can be perceived only in response to light that is purely at the associated wavelength, while the boundary colors on the "line of purples" can each only be generated by a specific ratio of the pure violet and the pure red at the ends of the visible spectral colors. The CIE chromaticity diagram is horseshoe-shaped, with its curved edge corresponding to all spectral colors (the spectral locus), and the remaining straight edge corresponding to the most saturated purples, mixtures of red and violet.

See also: color blindness, color theory, the inverted spectrum, primary colors, The dress, and visual perception.
What is User Datagram Protocol (UDP)?

User Datagram Protocol, commonly known as UDP, operates at the transport layer and is responsible for delivering data packets from one source to a destination. Unlike its counterpart, Transmission Control Protocol (TCP), UDP is a connectionless and unreliable protocol that runs on top of the Internet Protocol (IP) network layer. It does not establish a dedicated connection between two endpoints before transmitting data and does not guarantee delivery. Instead, it simply sends data packets, called datagrams, to the destination without any guarantee of delivery or order.

What is UDP used for?

UDP is commonly used for applications that prioritize speed and efficiency over reliability. It is ideal for real-time applications such as video streaming, online gaming, VoIP (Voice over IP), and DNS (Domain Name System) services. These applications can tolerate some packet loss or delay, making UDP a suitable choice.

How Does UDP Work?

Now that we have touched upon the technical aspects, let's explore how UDP actually works. UDP works by encapsulating data into datagrams and sending them across the network. The sender generates a datagram and attaches the UDP header. The datagram is then transmitted to the destination without establishing a connection. Upon receiving the datagram, the receiver extracts the data and uses the information in the header to process it. UDP does not provide any flow control or error recovery mechanisms, which means it is up to the application layer to handle these aspects.

UDP works on a "fire and forget" principle. When a sender wants to transmit data, it simply packages the data into UDP packets and sends them off to the recipient's IP address and port. No handshakes, acknowledgments, or retransmissions occur, making UDP incredibly fast.

UDP Header Composition

The UDP header consists of four main fields: source port, destination port, length, and checksum.
- Source Port: The source port field identifies the sender's port number, ensuring that the recipient knows where the data originated.
- Destination Port: This field specifies the port number at the receiving end, directing the data to the appropriate application or service.
- Length: The length field indicates the combined length of the UDP header and data. It helps the recipient understand the size of the incoming packet.
- Checksum: The checksum is used for error detection. While UDP does not guarantee data delivery, the checksum helps identify corrupted packets.

Applications of UDP

UDP is widely used in various applications. Some of the notable ones include:
- Streaming media: UDP is commonly used for real-time streaming of audio and video content. Services like YouTube, Netflix, and Spotify utilize UDP for efficient media delivery.
- Online gaming: UDP's low latency and fast transmission make it ideal for online gaming, where real-time interaction is essential.
- VoIP: Voice over IP services, such as Skype and WhatsApp, utilize UDP for real-time voice communication.
- DNS: UDP is used for domain name resolution. DNS queries and responses are typically transmitted using UDP.

UDP vs. TCP

UDP and TCP are two different transport protocols with distinct characteristics. While UDP is connectionless and unreliable, TCP is connection-oriented and provides reliable data delivery. TCP guarantees data integrity, ordering, and flow control, making it suitable for applications that require reliable data transmission. UDP, on the other hand, sacrifices reliability for speed and efficiency.

| UDP (User Datagram Protocol) | TCP (Transmission Control Protocol) |
| Connectionless – No formal connection setup. | Connection-oriented – Establishes a connection before data transfer. |
| Unreliable – Does not guarantee data delivery. | Reliable – Ensures data delivery with error-checking and retransmission. |
| Unordered – Packets may arrive out of order. | Ordered – Packets arrive in the correct order. |
| Low overhead – Minimal additional data in packets. | Higher overhead – Additional control information for reliability. |
| Low latency – Minimal delay in data transmission. | Higher latency – More time spent on connection setup and error-checking. |
| Real-time applications like video streaming, online gaming, VoIP. | Applications where data integrity is crucial, such as web browsing and file transfers. |
| Suitable for broadcasting data to multiple recipients. | Not suitable for broadcasting. |
| No automatic error correction or retransmission. | Automatic error correction and retransmission of lost packets. |
| Simple and lightweight. | Complex with extensive error-checking mechanisms. |
| Commonly uses ports like 53 (DNS) and 123 (NTP). | Commonly uses ports like 80 (HTTP) and 443 (HTTPS). |

UDP is preferred for applications that require speed and low latency, even if it means sacrificing reliability and data integrity. TCP, on the other hand, is ideal for applications where data must be delivered accurately and in the correct order, even if it introduces some latency and overhead. The choice between UDP and TCP depends on the specific needs of the application.

Advantages & Disadvantages of UDP

Advantages of UDP
- Speed: UDP is lightning-fast. It doesn't waste time establishing a connection or checking if data packets arrive in the correct order. This speed makes it ideal for real-time applications.
- Low Overhead: Unlike other protocols that add extra information to each data packet for error-checking and reliability, UDP keeps things minimal. This means less data is added to each packet, allowing for faster transmission.
- Low Latency: Latency refers to the delay between sending and receiving data. With UDP, this delay is minimal, making it perfect for applications where instant communication matters, like online gaming and video conferencing.
- Efficiency: UDP doesn't bog down the network with additional control messages and acknowledgments. It simply sends data packets, making it efficient for tasks like live streaming and voice calls.

Disadvantages of UDP
- Unreliable: UDP does not guarantee delivery or order of packets, which can be problematic for certain applications.
- No congestion control: Unlike TCP, UDP does not have built-in congestion control mechanisms. This means that UDP packets can congest the network, leading to increased packet loss.
- No error recovery: UDP does not provide error recovery mechanisms. If a packet is lost or damaged during transmission, it will not be retransmitted.
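To make the header layout and the "fire and forget" behavior described above concrete, here is a minimal Python sketch; the ports, address, and payload are placeholder values, and in normal use the operating system fills in the real UDP header for you.

```python
import socket
import struct

# 1) The four 16-bit UDP header fields described above, packed in network byte order.
#    (When you use the socket API below, the operating system builds this header;
#    this function only illustrates the layout.)
def build_udp_header(src_port: int, dst_port: int, payload: bytes, checksum: int = 0) -> bytes:
    length = 8 + len(payload)          # 8-byte header plus the data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(12345, 53, b"example payload")
print(struct.unpack("!HHHH", header))  # (source port, destination port, length, checksum)

# 2) "Fire and forget": send a datagram with no connection setup, acknowledgment,
#    or retransmission. Host and port here are placeholders for illustration.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello over UDP", ("127.0.0.1", 9999))
sock.close()
```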
UDP is a lightweight and efficient transport protocol used for real-time applications that prioritize speed over reliability. It is widely employed in streaming media, online gaming, VoIP, and DNS services. However, due to its unreliable nature, UDP may not be suitable for all types of applications. Understanding the strengths and limitations of UDP is essential for designing and implementing efficient network solutions.
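For completeness, a matching receiver simply binds a UDP socket to a port and reads whatever datagrams arrive; nothing is sent back to tell the sender whether the data was received. The port number below is an arbitrary example that must match the sender.

```python
import socket

# Minimal UDP receiver: bind to a port and read datagrams as they arrive.
# There is no connection and no acknowledgment back to the sender.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))        # example port; must match the sender
data, addr = sock.recvfrom(65535)   # buffer large enough for any UDP datagram
print(f"received {len(data)} bytes from {addr}: {data!r}")
sock.close()
```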
Although the Spanish, French, and English all had major impacts on their respective settlements, the English had a substantial influence on what became the American colonies. From the early founding of Jamestown in 1607 until the calls for American independence in 1776, the people who settled the English colonies had a direct impact on the nation's future. American culture and values changed over the course of the next century and a half, producing a distinctiveness found only in the New World. From religious groups to scientific thinkers, developments in the colonies led to independent viewpoints that differed from Great Britain's. Ultimately, this was a contributing factor to the American War of Independence in 1776.

There were many groups living in the United States between 1860 and 1900, and the groups people can be categorized into held very different viewpoints. The main groups are the wealthy, the common people, the Indians, and the Chinese. The wealthy looked at the West as a big paycheck: they went out west, bought up large amounts of land and production, and monopolized. The common people competed with these big companies for free and cheap land; they had been poor back east and came out west hoping to strike gold while mining, to get rich off abundant cropland, or, in the case of some women, simply because they were allowed to purchase land in the West. The Indians were native to the West, and after immigration started some of the bigger tribes fought back, but most tribes were too small and were forced off their lands by the American government.

The Navigation Acts restricted the colonies' trade with foreign countries while reducing the chances of the colonies becoming an independent nation; in addition, all British products that were to be sent to the colonies were heavily taxed in order to create more profit. The Sugar Act placed a tax on sugar, wine, and coffee, and denied any colonist accused of smuggling a trial by jury, eventually leading to a drastic plummet in the rum industry. Finally, the Stamp Act, passed without the consent of the colonists, taxed any paper or document in order to raise money from the colonists for Britain. This ultimately led the colonists to revolt against Britain and to write newspapers that promoted the idea of independence from the imperialist nation that had repeatedly denied them their liberty and democracy.

The British enacted several suffocating taxes and regulations upon the American colonies. The First Navigation Act, established in 1660, mandated that all trading ships must be built in Britain, that the ship's passengers must be seventy-five percent American or British, and that specific goods could only be exported to Britain (class notes).

P1 Explain the issues organisations must consider when planning computer systems maintenance

The issues that organisations must consider when planning a computer system are as follows.

Employee and Employer Responsibilities
- It is Cortex's responsibility to make sure the employee is allowed to have scheduled breaks when they require them. As part of their contract, they also have to provide the employee with the correct ergonomic equipment so they can do their job and work efficiently; this means that the employee is more effective and reliable when working from their desk, as they have the correct equipment.
- Employees also have a responsibility: making sure they use the business network entirely for work-related purposes and not for downloading anything that infringes the contractual agreement they have in place, known as the Acceptable Usage Policy. Cortex is also protected by the Computer Misuse Act.

More specifically, mercantilism stated that a nation's exports should be higher than its imports. The British brought these policies together to form the Navigation Acts for the colonies to follow, such as requiring that items like indigo, hemp, and tobacco be exported exclusively to Britain, which controlled where they could be sent. At first, the Navigation Acts made the colonists content because, under the new regulations, colonists were able to import British goods such as tea and dishes; however, as time went on, British rule tightened the regulations, using the colonies for its own economic advantage. Britain exploited the colonies by imposing a rule that colonial exports and imported goods would be controlled only by British merchants. Britain was able to profit off colonial raw goods by setting fixed prices on crops sold by planters, forcing all planters to abide by fixed rates when they could have sold for more.

1. Great Britain controlled the economy in the colonies through trade.
2. Every culture or country traded so that they could receive all of the essential goods that they needed to survive.
3. Great Britain forced the colonies to trade only with them so they could make a profit, and also so they could obtain the things they needed from the colonies.
4. As a result of the New World not having all that the colonists needed, Great Britain would have those goods that the colonists could use to survive; so they traded their goods back and forth.

All the taxation took its toll on the economic relationship between the colonists and Britain. Prior to the French and Indian War, the Wool, Hat, and Iron Acts forced Americans to ship raw materials to Britain to be manufactured, only to buy the finished products back from them. However, mercantilism was soon abandoned when the colonists decided to fight back. The Stamp Act enraged many of the elite colonists, and as Benjamin Franklin states, they wanted to "get it repeal'd" as soon as possible. They chose to boycott, refusing to import or consume the products Britain wanted them to; thus the economic relationship between the two places was significantly strained.

In around 1607 to 1763, the mother country, England, began enforcing many political and economic goals in the American colonies. In order to establish dominance and superiority, the British government believed that enforcing certain values and order in the American colonies would lead to the enrichment of the mother country. The English government enforced strict values onto the American colonies, depriving the colonists of their rights, which led to an increase in smuggling and rebellion among the colonists. The English government had enforced the Navigation Acts in order to control the Americans' trading rights.

- Maryland Toleration Act: created in 1649 to ease tensions between Protestants and Catholics; ultimately failed and did not end bickering between the two religions.
- Triangular trade: the trade between the eastern colonies, Africa, and Europe; included an exchange of slaves to the colonies, manufactured items such as guns and alcohol from Europe to the colonies and West Africa, and crops to Europe.
- Mercantilism: the foundation of mercantilist theory is that a nation must export more than it imports, placing a high value on gold, silver, and other precious metals.
- Navigation Acts: essentially a series of tariffs imposed upon the colonies beginning in 1651 to create an English monopoly over trade; colonists could only trade with England and had to use English ships.

With time, each colony managed to sustain itself, and each colony was now in high demand of African slaves. England's Royal African Company was responsible for transporting thousands of slaves to the English American colonies. The transporting of slaves was one of the greatest economic resources that the English relied on. However, competition was a big issue, so in 1651 the English Parliament passed its Navigation Act, which was to govern and control trade between England and its colonies. We see a similar technique with the Spanish: "The Crown had barred the colony from producing finished goods, requiring that colonists purchase them from Spain".

The great victory of the British in the French and Indian War came with heavy debt. This made Great Britain control its colonies more forcefully and drop its salutary neglect of her North American colonies. The series of economic acts Britain enforced on the North American colonies was the last straw that broke the camel's back, inciting the colonies' anger toward the British Parliament. This suggests the conflict between Great Britain and her North American colonies was more economic than rooted in political and social controversies and differences. Initially, the conflict between Great Britain and her colonies was mainly economic in origin, due to the taxes that the British imposed and Britain's view of the colonies as existing for its own economic benefit.

M2 - Explain the fundamental principles which have been applied to the designs

I have created an interactive HCI which has a lot of colours. I have made sure the colours used don't clash and that they are easy on the eye for everyone, so the black writing on the blue background can be read easily without any struggle. The buttons down the left-hand side of the page are perceived as being together because they are all the same size, they are all rectangular, and the text is all the same size, apart from the language changer, where the text is too big for the box.

With the help of cheap slave labor, Southern plantations made their profit margins greater by exporting goods such as cotton and tobacco to Great Britain. This was worthwhile and profitable for the Southern "aristocrats". The British devised a plan to disrupt the trade, due to the fact that the plantations were holding a large amount of British wealth.
The researcher sometimes unintentionally or actively affects the process while executing a systematic inquiry. This is known as research bias, and it can affect your results just like any other sort of bias. When it comes to studying bias, there are no hard and fast guidelines, which simply means that it can occur at any time. Experimental mistakes and a lack of concern for all relevant factors can lead to research bias. One of the most common causes of study results with low credibility is research bias. Because of its informal nature, you must be cautious when characterizing bias in research. To reduce or prevent its occurrence, you need to be able to recognize its characteristics. This article will cover what it is, its types, and how to avoid it. What is research bias? Research bias occurs when the researchers conducting an experiment influence the process or the findings so as to favour a specific outcome. It is often known as experimenter bias. Bias is a characteristic of the research technique that makes it rely on experience and judgment rather than data analysis. The most important thing to know about bias is that it is unavoidable in many fields. Understanding research bias and reducing the effects of biased views is an essential part of any research planning process. For example, when working with social research subjects it is much easier to become attached to a certain point of view, compromising fairness. How does research bias affect the research process? Research bias can significantly affect the research process, weakening its integrity and leading to misleading or erroneous results. Here are some examples of how this bias might affect the research process. Distorted research design: when bias is present, study results can be skewed or wrong. It can make the study less trustworthy and valid. If bias affects how a study is set up, how data is collected, or how it is analyzed, it can cause systematic mistakes that move the results away from the true or unbiased values. It can make it hard to believe that the findings of a study are correct. Biased research can lead to unjustified or wrong claims because the results may not reflect reality or give a complete picture of the research question. Bias can lead to inaccurate interpretations of research findings. It can alter the overall comprehension of the research issue. Researchers may be tempted to interpret the findings in a way that confirms their previous assumptions or expectations, ignoring alternate explanations or contradictory evidence. This bias also poses ethical considerations. It can have negative effects on individuals, groups, or society as a whole. Biased research can misinform decision-making processes, leading to ineffective interventions, policies, or therapies. Research bias undermines scientific credibility. Biased research can damage public trust in science. It may reduce reliance on scientific evidence for decision-making. Types of research bias with examples: bias can be seen in practically every aspect of quantitative research and qualitative research, and it can come from both the survey developer and the participants. The sorts of biases that come directly from the survey maker are the easiest to deal with out of all the types of bias in research. Let’s look at some of the most typical research biases. Design bias happens when a researcher fails to account for the bias inherent in most experiments. It relates to how the research is organized and the methods used.
The researcher must demonstrate that they realize this and have tried to mitigate its influence. Another form of design bias develops after the research is completed and the results are analyzed. It occurs when the researchers’ original concerns are not reflected in how the results are presented, which happens all too often these days. For example, a researcher working on a survey containing questions concerning health benefits may overlook the known limitations of the sample group. It’s possible that the group tested was all male or all over a particular age. Selection bias or sampling bias: selection bias occurs when volunteers are chosen to represent your research population but those with different experiences are ignored. In research, selection bias manifests itself in a variety of ways. When the sampling method introduces a preference into the research, this is known as sampling bias; selection bias is also referred to as sampling bias. For example, research on a disease that depended heavily on white male volunteers cannot be generalized to the full community, including women and people of other races or communities. Procedural bias is a sort of research bias that occurs when survey respondents are given insufficient time to complete surveys. As a result, participants are forced to submit half-formed thoughts, which do not accurately reflect their thinking. Another sort of bias comes from using individuals who are forced to participate, as they are more likely to complete the survey quickly so as to leave themselves enough time to accomplish other things. For example, if you ask your employees to complete a survey during their break, they may feel pressured, which may compromise the validity of their results. Publication or reporting bias: a sort of bias that influences research is publication bias, also known as reporting bias. It refers to a condition in which favorable outcomes are more likely to be reported than negative or null ones. Analysis bias can also make it easier for reporting bias to happen. The publication standards for research articles in a specific area frequently reflect this bias. Researchers sometimes choose not to disclose their outcomes if they believe the data do not reflect their theory. As an example, there were seven studies on the antidepressant drug reboxetine; only one of them was published, and the others remained unpublished. Measurement or data collection bias: a defect in the data collection process or measuring technique causes measurement bias. Data collection bias is also known as measurement bias. It occurs in both qualitative and quantitative research methodologies. Data collection bias can occur in quantitative research when you use an approach that is not appropriate for your research population. Instrument bias is one of the most common forms of measurement bias in quantitative investigations; a defective scale, for instance, would generate instrument bias and invalidate the experimental process. Another example is asking those who do not have internet access to complete a survey by email or on your website. Data collection bias occurs in qualitative research when inappropriate survey questions are asked during an unstructured interview. Bad survey questions are those that lead the interviewee to make presumptions. Subjects are frequently hesitant to provide socially disapproved responses for fear of criticism. For example, a subject may avoid coming across as homophobic or racist in an interview.
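To make the selection and sampling bias described above more concrete, the short simulation below compares an estimate from a simple random sample with one from a convenience sample that over-represents one group. The population, group sizes and satisfaction scores are invented purely for illustration; nothing here refers to a real data set.

```python
import random

random.seed(42)

# Hypothetical population of satisfaction scores (0-10) made of two groups.
# Group sizes and score levels are invented purely for this illustration.
office_staff = [random.gauss(7.5, 1.0) for _ in range(8000)]
field_staff = [random.gauss(5.0, 1.0) for _ in range(2000)]
population = office_staff + field_staff


def mean(values):
    return sum(values) / len(values)


# Unbiased approach: a simple random sample drawn from the whole population.
random_sample = random.sample(population, 500)

# Biased approach (selection/sampling bias): only the easy-to-reach group
# is surveyed, so field staff are ignored entirely.
convenience_sample = random.sample(office_staff, 500)

print(f"True population mean:        {mean(population):.2f}")
print(f"Random sample estimate:      {mean(random_sample):.2f}")
print(f"Convenience sample estimate: {mean(convenience_sample):.2f}")
```

The convenience sample systematically overestimates the population average because one group is over-represented, which is exactly the failure mode described above for selection and sampling bias.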
Some more types of bias in research include the ones listed here. Researchers must understand these biases and reduce them through rigorous study design, transparent reporting, and critical evidence review: - Confirmation bias: Researchers often search for, evaluate, and prioritize material that supports their existing hypotheses or expectations, ignoring contradictory data. This can lead to a skewed perception of results and perhaps biased conclusions. - Cultural bias: Cultural bias arises when cultural norms, attitudes, or preconceptions influence the research process and the interpretation of results. - Funding bias: Funding bias takes place when research is backed by sponsors with strong motives. It can skew research design, data collection, analysis, and interpretation toward the funding source. - Observer bias: Observer bias arises when the researcher or observer affects participants’ replies or behavior. Data collection might be biased by accidental cues, expectations, or subjective interpretations. How does QuestionPro help in reducing bias in the research process? QuestionPro offers several features and functionalities that can contribute to reducing bias in the research process. Here’s how QuestionPro can help: QuestionPro allows researchers to randomize the order of survey questions or response alternatives. Randomization helps to remove order effects and limit bias from the order in which participants encounter the items. Branching and skip logic: branching and skip logic capabilities in QuestionPro allow researchers to design customized survey pathways based on participants’ responses. This enables tailored questioning, ensuring that only pertinent questions are asked of participants, and reduces the bias generated by irrelevant or needless questions. Diverse question types: QuestionPro supports a wide range of question types, including multiple-choice, Likert scale, matrix, and open-ended questions. Researchers can choose the most relevant question types to get unbiased data while avoiding leading or suggestive questions that may affect participants’ responses. QuestionPro enables researchers to collect anonymous responses, protecting the confidentiality of participants. This can encourage participants to provide more candid and unbiased feedback, especially when dealing with sensitive or contentious issues. Data analysis and reporting: QuestionPro has powerful data analysis and reporting options, such as charts, graphs, and statistical analysis tools. These features allow researchers to examine and interpret the obtained data objectively, decreasing the role of bias in interpreting results. Collaboration and peer review: QuestionPro supports peer review and researcher collaboration. Involving several researchers and soliciting external opinions helps uncover and overcome biases in research planning, questionnaire formulation, and data analysis. You must understand biases in research and how to deal with them. Knowing the different sorts of bias in research allows you to identify them readily, and a clear understanding is also necessary to recognize bias in any form. QuestionPro provides many tools and settings that can assist you in dealing with research bias. Try QuestionPro today to undertake your own bias-free quantitative or qualitative research. Frequently Asked Questions: Research bias affects the validity and dependability of your research’s findings, resulting in inaccurate interpretations of the data and incorrect conclusions.
Bias should be avoided in research to ensure that findings are accurate, valid, and objective. To avoid research bias, researchers should take proactive steps throughout the research process, such as developing a clear research question and objectives, designing a rigorous study, following standardized protocols, and so on.
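The question-order randomization mentioned in the discussion above can also be sketched in a few lines of code. The question texts and the per-respondent shuffling scheme below are illustrative assumptions only, not a description of how any particular survey tool works internally.

```python
import random

# Illustrative questions; any real survey would define its own list.
QUESTIONS = [
    "How satisfied are you with your workload?",
    "How satisfied are you with your manager?",
    "How satisfied are you with your pay?",
]


def questionnaire_for(respondent_id):
    """Return the questions in a per-respondent random order, so that no
    single question systematically primes the answers to the others."""
    rng = random.Random(respondent_id)  # seeded so each respondent's order is reproducible
    order = list(QUESTIONS)             # copy; the master list stays untouched
    rng.shuffle(order)
    return order


for rid in range(3):
    print(rid, questionnaire_for(rid))
```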
The Universe is 13.72 billion years old2. With scientific instruments we can directly see 400 billion observable galaxies in the Universe2. On average, each galaxy is 3 million light-years away from any other galaxy3, and most galaxies are collected together in super-clusters of thousands of galaxies4. A spiral galaxy such as ours typically has about 100 000 000 000 stars in it - 100 billion5. These stars, under the influence of gravity, consuming matter and contracting, sometimes explode as supernovae rather than collapsing into black holes. In any given galaxy, a star explodes about once every hundred years5. Our planet, the Earth, sits within The Milky Way galaxy, which is "part of the Virgo supercluster of galaxies, whose center is almost 60 million light-years away from us"4. Observational data of the speeds that galaxies are moving away from us led to the discovery that everything in the Universe appears to be getting further apart. And the further away things are, the faster they are moving away from us. This doesn't mean we are at the center of an expanding Universe and the easiest way to dispel that idea is to imagine a balloon with lots of dots drawn on it. One of those dots is our galaxy, the Milky Way. As the balloon expands the same phenomenon occurs: everything gets further away from any particular point. Nothing is at the center, because there is no center7. The fact that all galaxies are getting further apart was first realized by astronomer "Vesto Melvin Slipher (1875-1969), who calculated the speed at which 25 galaxies were moving" and then by Edwin Hubble8,9. It did not take long to realize that if galaxies are all moving apart, then at some point in history they must all have been clumped together, whereupon the force of gravity would have been pulling on them intensely, as in a black hole. The force which propelled the Universe into expansion must have been radically powerful. And hence, the Big Bang Theory was born; in particular, the Belgian scientist Georges Lemaître (1894-1966) is given the credit for the idea when in 1931 he suggested that a "primeval atom" exploded8. Notable contributors to the theory have included "notable cosmologists such as George Gamow (1904-68) and Alan Guth (b. 1947)". Impressive advancements in technology have led to discovery after discovery which has proven the specific predictions of the Big Bang Theory, in particular from measurements of cosmic microwave background radiation (CMBR) levels, first discovered in 1965 by Arno Penzias (b. 1933) and Robert Wilson (b. 1936). Before the advent of looped advertisements, TV channels used to simply close out-of-hours. On TV signals received by satellite dish "the screen would revert to static. About 1 percent of that static you saw on the television screen was radiation left over from the Big Bang"10. When Carl Sagan wrote "Cosmos"11 in 1995, the best data pointed to a Universe that was between 15 and 20 billion years old. But measurements improved, and by 2006 we had narrowed this down to between 12 and 15 billion years old12. Since then, measurements have shown it to be about 13.72 billion years old13, dating from the event that generated the dimensions of space and time and propelled the Universe into existence8. "According to the latest calculation of Hubble's constant, which was performed using data gathered by the Hubble Space Telescope, the universe is now expanding at a rate of 73 km (around 45 miles) per second per megaparsec"8.
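The quoted expansion rate can be turned into a rough recession-speed estimate using Hubble's law, v = H0 × d. The sketch below assumes H0 = 73 km/s per megaparsec, as quoted above, and a distance of 60 million light-years (roughly the distance quoted for the centre of the Virgo supercluster); the light-year-to-megaparsec conversion is a standard one.

```python
# Rough recession speed from Hubble's law: v = H0 * d.
H0 = 73.0                 # km/s per megaparsec, the rate quoted above
LY_PER_MPC = 3.262e6      # light-years in one megaparsec (standard conversion)

distance_ly = 60e6        # ~60 million light-years, the Virgo-centre distance quoted above
distance_mpc = distance_ly / LY_PER_MPC

recession_speed = H0 * distance_mpc   # km/s
print(f"Distance: {distance_mpc:.1f} Mpc")
print(f"Predicted recession speed: {recession_speed:.0f} km/s")  # roughly 1,300 km/s
```

Doubling the distance doubles the predicted speed, which is the "further away means faster moving" relation described above.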
The Universe is not "flying apart" by pushing its boundaries further away. It is being forced apart from within, so that the very expanses of space itself are stretching. That's why the expansion is measured in km per second per megaparsec. Hence with the balloon analogy, the increasing separation of galaxies isn't caused by the galaxies themselves, but by changes in the fabric on which they sit. A singularity is a point in time or space where the physical laws themselves have broken down. This physical situation is itself impossible, and for a long time many physicists argued against the very concept of a singularity. We thought, and some still do, that the appearance of a singularity in a model of the Universe is an indicator that the theory's mathematical formulas are flawed, and that unknown factors are disturbing the results. It was gravity, and black holes, that always produced the greatest problems. "Then, in a series of brilliant and embracing mathematical theorems, Penrose and Hawking proved that singularities were quite general and, under all reasonable physical conditions, unavoidable, once gravity becomes strong enough"14. While this work went on, many theorists had already realized that the Big Bang must have itself started with something that seemed incredibly similar to a singularity. Instead of a black hole sucking in a huge quantity of mass until the compression due to gravity caused a singularity, the big bang itself looked like the end-result: If the entire Universe was pulled into a black hole, the resulting singularity may well look exactly like the beginning of the Big Bang. Hawking has long argued that as a singularity is by definition a state where the laws of physics are broken, what comes out of it does not have any rational or logical structure. "This accords well with the belief that the primeval universe was in a state of maximum disorder (thermodynamic equilibrium)" and in this case "the big bang singularity simply coughs out a randomly arranged universe displaying no particular order"14. Some scientists say that "the universe must be such as to admit conscious beings in it at some stage"15. On what grounds can scientists justify such a stance? In the early decades of quantum physics, many strange effects were observed that were poorly understood. Quantum physics is still supremely odd. One of the early strands of thought was made famous through Erwin Schrödinger's thought experiment, now known as Schrödinger's Cat. It appeared that fundamental aspects of reality only resolved themselves into concrete form if observed and until that moment, things remained mere statistical probability and sometimes even particles behave as if they are in two different states at once: Superposition remains a fundamental part of quantum physics. The Universe itself remains unstable, unset, volatile, until such a time as it is observed. And this means that it just wobbles around in all possible configurations until the configuration that spawns valid observers becomes dominant. Then, hey presto, what springs into solid existence is a Universe that will spawn intelligent beings. The "law of conservation of energy" holds that energy can be neither created nor destroyed, only changed in form. This means that although everything in the Universe may be destroyed, the total energy stays the same. This fundamental law of thermodynamics is one of the most important laws in physics, and is applicable at all scales, from the quantum, to the large scale sciences of engineering and chemistry.
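Stated as an equation (in its standard textbook form, not taken from any source cited here), the first law of thermodynamics says that any change in a system's internal energy U is fully accounted for by the heat Q flowing in minus the work W done by the system, so energy only ever changes form:

```latex
% First law of thermodynamics (standard textbook form):
% the change in internal energy equals heat in minus work out.
\Delta U = Q - W
```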
It is so important that most physicists think that this law can simply never be broken. Many theists have argued that the scientific laws of thermodynamics, in particular the conservation of energy, imply that there must be a God. If energy cannot be created nor destroyed, they say, then this is proof that the entire spectacle of reality has to be the exercise of a creator God. I.e., a God that creates all of the energy, and then, who creates the laws of thermodynamics. On the face of it, it is certainly one way to explain away the existence of all energy. But it turns out that the explanation isn't necessary, because the current evidence shows that there is no contradiction to the law of conservation of energy, and it all comes down to gravity17. You expend energy to climb out of an energy well, and, if you add up all the energy of objects and masses and take into account gravity and balance this against the gravitational power of black holes and galaxies, we find that the net value of the energy involved is zero. In other words, if all of the energy in the universe can be accounted for as being matched equally by gravity, it appears that there is actually no created energy. “Astronomers can measure the masses of galaxies, their average separation, and their speeds of recession. Putting these numbers into a formula yields a quantity which some physicists have interpreted as the total energy of the universe. The answer does indeed come out to be zero within the observational accuracy. [...] The cosmos can [come] into existence without requiring any energy input at all.” “Physicist Frank Wilczek, who was one of the first theorists to explore these possibilities, has reminded me that he utilized precisely the same language I have used previously in this chapter, in the 1980 Scientific American article he wrote on the matter-antimatter asymmetry of the universe. After describing how a matter-antimatter asymmetry might plausibly be generated in the early universe based on our new understanding of particle physics, he added a note that this provided one way of thinking about the answer to the question of why there is something rather than nothing: nothing is unstable. [...] [We see that] the total energy of a closed universe is zero, and if the sum-over-paths formalism of quantum gravity is appropriate, then quantum mechanically such universes could appear spontaneously with impunity, carrying no net energy. I want to emphasize that these universes would be completely self-contained space-times, disconnected from our own. There is a hitch, however. A closed expanding universe filled with matter will in general expand to a maximum size and then recollapse just as quickly, ending up in a space-time singularity where the no-man's land of quantum gravity at present cannot tell us what its ultimate fate will be. The characteristic lifetime of tiny closed universes will therefore be microscopic, perhaps on the order of the Planck time, the characteristic scale over which quantum gravitational processes should operate, about 10⁻⁴⁴ seconds or so. [...] The very creative cosmologist I mentioned earlier, Alex Vilenkin, who has since become a friend, had actually just written a paper that described in exactly this fashion how quantum gravity indeed might create an inflating universe directly from nothing.” “The law of conservation of energy, also known as the first law of thermodynamics, requires that energy come from somewhere.
In principle, the creation hypothesis could be confirmed by direct observation or theoretical requirement that conservation of energy was violated 13.7 billion years ago at the start of the big bang. However, neither observations nor theory indicates this to have been the case. [...] The total energy of the universe appears to be zero. As famed cosmologist Stephen Hawking said in his 1988 best seller, A Brief History of Time, [...] the negative gravitational energy exactly cancels the positive energy represented by the matter. So the total energy of the universe is zero. Specifically, within small measurement errors, the mean energy density of the universe is exactly what it should be for a universe that appeared from an initial state of zero energy, within a small quantum uncertainty.” "God, the Failed Hypothesis: How Science Shows That God Does Not Exist" Prof. Victor J. Stenger (2007)20 "According to this point of view, there is an ensemble of universes of which ours is but one member. The universe we perceive is only one of a huge, perhaps infinite, collection of universes. [...] Although the overwhelming majority of these universes are unsuitable for life"21. In an infinite set of universes, each with different laws and properties, every possible Universe must exist. We are "guaranteed [...] that some universe would arise with the laws that we have discovered. No mechanism and no entity is required to fix the laws of nature to be what they are"22. There is no design-for-life - some versions of the Universe just happen to be able to support life, because all possible configurations of the Universe exist. It seems a safe bet that philosophically minded aliens have a tendency to sit in their Universes and think, "this one was designed for us!". They are clearly wrong. "The universe will eventually begin to contract, falling back on itself in a gigantic cataclysm known as the 'big crunch'. Some physicists have speculated that the highly compressed cosmos, rather than imploding to oblivion at a spacetime singularity, will 'bounce' at some enormous density" perhaps resetting universal constants and rules to a random value15. Carl Sagan writes that "scientists wonder about what happens in an oscillating universe at the cusps, at the transition. [...] Some think that the laws of nature are then randomly reshuffled [from] an infinite range of possible natural laws"23. This is like the multiverse theory except that there is only ever one Universe, which has existed in an infinite series of configurations. Some of those instances support life. It is no coincidence that we exist in one that allows life to exist - it was an eventual inevitability. No design was required, just pure randomness. But there are some problems with this idea. The resultant laws of physics cannot be completely random, in this arrangement. If the existence of a new Universe is dependent upon the collapse of another, then it must be the case that every Universe eventually collapses. Carl Sagan worried about this too, saying that all it would take is for one Universe with a weak gravitational force to come to exist for the cycle to break, because a fly-away Universe that never contracts would never allow another big crunch to occur23. So if an oscillating universe scenario is correct, it seems perhaps that some law of gravity may remain constant, or that the possible range of laws is constrained. But such a constraint requires some arch-universal-laws to survive each singularity.
The marsh of contradictions and complexities leads me away from the idea that an oscillating Universe can be used to explain the laws of any given Universe; it merely moves the goal-posts further away. An Oscillating Universe also requires that the Universe is "closed", that is, it will eventually stop expanding and gravity will make it contract, resulting in a "big crunch". But the evidence is not in favour of the Big Crunch theory. In 1997, calculations of the total mass of the Universe revealed that the Universe's mass was not enough to halt the expansion and cause a contraction, and "in 1998, astronomers studying type 1 supernovae in distant galaxies concluded that the universe's rate of expansion is accelerating rather than decelerating"24. One particular multiverse theory results from our observations of the bubbling chaotic foam that appears at the quantum level: the continual thick soup of spontaneously created particle and antiparticle pairs seems to indicate that creation is continually occurring all around us, on the tiniest scales imaginable. Bubbles of reality come into existence for a blink of a second, and then self-annihilate. “The laws of quantum mechanics imply that, on very small scales, for very short times, empty space can appear to be a boiling, bubbling brew of virtual particles and fields wildly fluctuating in magnitude. These quantum fluctuations may be important for determining the character of protons and atoms, but generally they are invisible on larger scales, which is one of the reasons why they appear so unnatural to us. [...] It can be truly said that we all are here today because of quantum fluctuations in what is essentially nothing.” Now, our Universe may well be such a bubble, with all the dimensions of time and space being a temporary blip on a radar. As subjects of that Universe, the flash in the pan appears to be a Universal lifetime. Perhaps what we are witnessing at the quantum level is an almost infinite series of Universes being created and destroyed; each with its own fundamental laws and unique Universal constants and properties. Our Universe is one such balloon, existing for a fraction of a moment inside another Universe, all of our energy and gravity being just one temporary fluctuation. In all of these myriad existences, some will find themselves suitable to harbour life, and many won't. From an insider's point of view, most of these Universes will be very short lived - "In order for the closed universes that might be created through such mechanisms to last for longer than infinitesimal times, something like inflation is necessary"26. Authors such as John Gribbin in "In Search of the Edge of Time" (1995)27 describe such quantum foam in great detail, and although it is highly speculative, there is nothing in the laws of physics which prevents such a kaleidoscope from existing, although it seems theoretically and practically impossible to ever investigate such a closed-off world. The undirected and chaotic bubbling froth of random fluctuations that occur at the quantum level and may themselves be responsible for the entire existence of our inflated Universe have no cause or reason; each Universe itself is random and unforeseen. Given all of this, there "is no prescribed cause for our universe"22. Dr Steven Weinberg wrote in 1977 that "the more the universe seems comprehensible, the more it also seems pointless"28.
Conscious life may be its most wonderful asset and an amazing emergent property of its slow ageing, but the cruel gothic irony is that our lives are temporary and fleeting, and our understanding of the Universe, and even of ourselves, can only ever be superficial compared to all there is to know. These thoughts are the source of supreme sadness, but also should inspire us to strive harder, to live life better, for others, for the betterment of our species especially through science, so that we may one day fight back a little bit against the eternal ravages of death, by living lives worthy of sentient and benevolent beings, as best we can. “I don't try to imagine a personal God; it suffices to stand in awe at the structure of the world, insofar as it allows our inadequate senses to appreciate it.” “A quasi-mystical response to nature and the universe is common among scientists and rationalists.” That maths is both an artform and a beautiful enterprise is something often repeated by those in the know. Likewise, it is a common theme that those who enjoy the sciences - the challenges of scientific theory - often have greater feelings towards their chosen fields than the cold experience of technical number-crunching pitted with moments of inspiration. The ongoing search for truth bestows upon its adherents a glowing satisfaction and awe at the wonder of the universe. Reality is simultaneously complex and simple, engaging and passive, black and white and colourful. Out of simple laws comes complexity, and out of the chaos of experimentation slowly comes understanding. The scientific methods of understanding the world can involve a person completely and fully; the intellectual and rational commitment to hard work and truth are obvious. Not so obvious is the emotional wonder and adoration that arises within those who seek the truth. Philosophers and scientists, as Dawkins points out, have had a tendency towards an almost mystical and pantheistic love of the fabrics of reality. Steven Weinberg, professor of physics and astronomy, says in The First Three Minutes (1977) that "the effort to understand the universe is one of the very few things that lifts human life a little above the level of farce"31. Understanding gives meaning and value to life. The cold hard facts of science can seem inhuman, belittling and sometimes even demoralizing. The unfortunate truths of the ultimate facts of life, thought and teleology have led some laypeople to presume that such knowledge leads to nihilism. This concern is not, however, accurate. Scientists, cosmologists, physicists and other researchers often find that the universe is inspiring, the more they learn. Prof. Richard Dawkins, the foremost public evolutionary biologist, explains that such abstract concepts are irrelevant to our spiritual existential problems. “There is indeed no purpose in the ultimate fate of the cosmos, but do any of us really tie our life's hopes to the ultimate fate of the cosmos anyway? Of course we don't, not if we are sane. Our lives are ruled by all sorts of closer, warmer, human ambitions and perceptions. To accuse science of robbing life of the warmth that makes it worth living is so preposterously mistaken, so diametrically opposite to my own feelings and those of most working scientists.” Most scientists develop feelings that are the opposite of nihilistic. Some come to conduct the search for truth with heartfelt sanctity.
“Let's teach our children from a very young age about the story of the universe and its incredible richness and beauty. It is already so much more glorious and awesome - and even comforting - than anything offered by any scripture or God concept I know.” Carolyn Porco (2007)33 “Carolyn Porco, a senior research scientist at the Space Science Institute in Boulder, Colo., called, half in jest, for the establishment of an alternative church, with Dr. Tyson, whose powerful celebration of scientific discovery had the force and cadence of a good sermon, as its first minister.” Skeptical Inquirer (2007) Prof. Porco's call is an echo of what an earlier esteemed astronomer and prominent scientific thinker was wondering about. Carl Sagan wrote, 13 years ago: “A religion old or new, that stressed the magnificence of the universe as revealed by modern science, might be able to draw forth reserves of reverence and awe hardly tapped by the conventional faiths. Sooner or later, such a religion will emerge.” Carl Sagan, Pale Blue Dot (1994) Such a universe-admiring and science-embracing religion does exist! Paul Harrison founded the World Pantheist Movement in 1997. Although the word 'theist' is used in the title, scientific pantheism is not a theistic religion. The title merely denotes that all of reality is a divine total - God is not a supernatural being in its own right. “Pantheism is a form of theism (god-belief) which equates the universe itself as god: be this in either a conscious or automatic sense. The whole system of physical laws, cause and effect, and time itself, are the internal workings of a perfect divine being. God isn't an external being to the Universe but is rather a non-personal, non-anthropomorphic and non-personified omnipresent being. Strato of the 4th century BCE may have been a pantheist34; Giordano Bruno is said to have professed pantheism and was burned at the stake by the Roman Catholic Church for it, in 1600. Despite that earlier event, pantheism is associated more strongly with Baruch Spinoza later in the 17th century. The World Pantheist Movement (WPM) is the principal organized form of this belief today35. Pantheism is similar to deism, especially in a practical sense, so much so that many call the WPM a philosophical society rather than a religious group. Pantheism has a very good and clean image, is accepting of science and human understanding because all such insights are pathways to the divine. [...] "Scientific Pantheism" is a science-first philosophy of life that embodies an emotional embrace of reality, including reverence for the universe itself. It was founded by Paul Harrison of Hampstead, London, who is the president of the World Pantheist Movement. On its good side, Pantheism is a wholesome embrace of the beauty of the natural universe and natural laws, and has a genuinely inspirational attitude towards scientific endeavours to understand nature better.” Religionists often struggle with the idea that Human life is the result of unguided natural processes. Some arguments that theists make for the existence of God center on the idea of God as the "First Cause". In other words, God is much like the uncaused big bang, but with a great many human-like personality traits mixed in, alongside properties such as "all-knowing" and "all-powerful". Also see "Creationism and Intelligent Design: Christian Fundamentalism" by Vexen Crabtree (2015).
The Pioneer anomaly or Pioneer effect was the observed deviation from predicted accelerations of the Pioneer 10 and Pioneer 11 spacecraft after they passed about 20 astronomical units (3×10⁹ km; 2×10⁹ mi) on their trajectories out of the Solar System. The apparent anomaly was a matter of tremendous interest for many years, but has been subsequently explained by an anisotropic radiation pressure caused by the spacecraft's heat loss. Both Pioneer spacecraft are escaping the Solar System, but are slowing under the influence of the Sun's gravity. Upon very close examination of navigational data, the spacecraft were found to be slowing slightly more than expected. The effect is an extremely small acceleration towards the Sun, of (8.74±1.33)×10⁻¹⁰ m/s², which is equivalent to a reduction of the outbound velocity by 1 kilometre per hour (0.6 mph) over a period of ten years. The two spacecraft were launched in 1972 and 1973 and the anomalous acceleration was first noticed as early as 1980, but not seriously investigated until 1994. The last communication with either spacecraft was in 2003, but analysis of recorded data continues. Various explanations, both of spacecraft behavior and of gravitation itself, were proposed to explain the anomaly. Over the period 1998–2012, one particular explanation became accepted. The spacecraft, which are surrounded by an ultra-high vacuum and are each powered by a radioisotope thermoelectric generator (RTG), can shed heat only via thermal radiation. If, due to the design of the spacecraft, more heat is emitted in a particular direction—what is known as a radiative anisotropy—then the spacecraft would accelerate slightly in the direction opposite of the excess emitted radiation due to radiation pressure. Because this force is due to the recoil of thermal photons, it is also called the thermal recoil force. If the excess radiation and attendant radiation pressure were pointed in a general direction opposite the Sun, the spacecraft's velocity away from the Sun would be decelerating at a greater rate than could be explained by previously recognized forces, such as gravity and trace friction due to the interplanetary medium (imperfect vacuum). By 2012 several papers by different groups, all reanalyzing the thermal radiation pressure forces inherent in the spacecraft, showed that a careful accounting of this explains the entire anomaly, and thus the cause was mundane and did not point to any new phenomena or need for a different physical paradigm. The most detailed analysis to date, by some of the original investigators, explicitly looks at two methods of estimating thermal forces, then states "We find no statistically significant difference between the two estimates and conclude that once the thermal recoil force is properly accounted for, no anomalous acceleration remains." - 1 Description - 2 Explanation: thermal recoil force - 3 Indications from other missions - 4 Potential issues with the thermal solution - 5 Previously proposed explanations - 6 Further research avenues - 7 Meetings and conferences about the anomaly - 8 Notes - 9 References - 10 Further reading - 11 External links Pioneer 10 and 11 were sent on missions to Jupiter and Jupiter/Saturn respectively. Both spacecraft were spin-stabilised in order to keep their high-gain antennas pointed towards Earth using gyroscopic forces.
Although the spacecraft included thrusters, after the planetary encounters they were used only for semiannual conical scanning maneuvers to track Earth in its orbit, leaving them on a long "cruise" phase through the outer Solar System. During this period, both spacecraft were repeatedly contacted to obtain various measurements on their physical environment, providing valuable information long after their initial missions were complete. Because the spacecraft were flying with almost no additional stabilization thrusts during their "cruise", it is possible to characterize the density of the solar medium by its effect on the spacecraft's motion. In the outer Solar System this effect would be easily calculable, based on ground-based measurements of the deep space environment. When these effects were taken into account, along with all other known effects, the calculated position of the Pioneers did not agree with measurements based on timing the return of the radio signals being sent back from the spacecraft. These consistently showed that both spacecraft were closer to the inner Solar System than they should be, by thousands of kilometres—small compared to their distance from the Sun, but still statistically significant. This apparent discrepancy grew over time as the measurements were repeated, suggesting that whatever was causing the anomaly was still acting on the spacecraft. As the anomaly was growing, it appeared that the spacecraft were moving more slowly than expected. Measurements of the spacecraft's speed using the Doppler effect demonstrated the same thing: the observed redshift was less than expected, which meant that the Pioneers had slowed down more than expected. When all known forces acting on the spacecraft were taken into consideration, a very small but unexplained force remained. It appeared to cause an approximately constant sunward acceleration of (8.74±1.33)×10⁻¹⁰ m/s² for both spacecraft. If the positions of the spacecraft were predicted one year in advance based on measured velocity and known forces (mostly gravity), they were actually found to be some 400 km closer to the sun at the end of the year. This anomaly is now believed to be accounted for by thermal recoil forces.
Explanation: thermal recoil force
Starting in 1998, there were suggestions that the thermal recoil force was under-estimated, and perhaps could account for the entire anomaly. However, accurately accounting for thermal forces was hard, because it needed telemetry records of the spacecraft temperatures and a detailed thermal model, neither of which was available at the time. Furthermore, all thermal models predicted a decrease in the effect with time, which did not appear in the initial analysis. One by one these objections were addressed. Many of the old telemetry records were found, and converted to modern formats. This gave power consumption figures and some temperatures for parts of the spacecraft. Several groups built detailed thermal models, which could be checked against the known temperatures and powers, and allowed a quantitative calculation of the recoil force. The longer span of navigational records showed the acceleration was in fact decreasing. We investigate the possibility that the anomalous acceleration of the Pioneer 10 and 11 spacecraft is due to the recoil force associated with an anisotropic emission of thermal radiation off the vehicles.
To this end, relying on the project and spacecraft design documentation, we constructed a comprehensive finite-element thermal model of the two spacecraft. Then, we numerically solve thermal conduction and radiation equations using the actual flight telemetry as boundary conditions. We use the results of this model to evaluate the effect of the thermal recoil force on the Pioneer 10 spacecraft at various heliocentric distances. We found that the magnitude, temporal behavior, and direction of the resulting thermal acceleration are all similar to the properties of the observed anomaly. As a novel element of our investigation, we develop a parameterized model for the thermal recoil force and estimate the coefficients of this model independently from navigational Doppler data. We find no statistically significant difference between the two estimates and conclude that once the thermal recoil force is properly accounted for, no anomalous acceleration remains. Although the above reference has the most detailed analysis to date, the explanation based on thermal recoil force has the support of other independent research groups, using a variety of computational techniques. Examples include "thermal recoil pressure is not the cause of the Rosetta flyby anomaly but likely resolves the anomalous acceleration observed for Pioneer 10." and "It is shown that the whole anomalous acceleration can be explained by thermal effects". Indications from other missions The Pioneers were uniquely suited to discover the effect because they have been flying for long periods of time without additional course corrections. Most deep-space probes launched after the Pioneers either stopped at one of the planets, or used thrusting throughout their mission. The Voyagers flew a mission profile similar to the Pioneers, but were not spin stabilized. Instead, they required frequent firings of their thrusters for attitude control to stay aligned with Earth. Spacecraft like the Voyagers acquire small and unpredictable changes in speed as a side effect of the frequent attitude control firings. This 'noise' makes it impractical to measure small accelerations such as the Pioneer effect; accelerations as large as 10−9 m/s2 would be undetectable. Newer spacecraft have used spin stabilization for some or all of their mission, including both Galileo and Ulysses. These spacecraft indicate a similar effect, although for various reasons (such as their relative proximity to the Sun) firm conclusions cannot be drawn from these sources. The Cassini mission has reaction wheels as well as thrusters for attitude control, and during cruise could rely for long periods on the reaction wheels alone, thus enabling precision measurements. It also had radioisotope thermoelectric generators (RTGs) mounted close to the spacecraft body, radiating kilowatts of heat in hard-to-predict directions. After Cassini arrived at Saturn, it shed a large fraction of its mass from the fuel used in the insertion burn and the release of the Huygens probe. This increases the acceleration caused by the radiation forces because they are acting on less mass. This change in acceleration allows the radiation forces to be measured independently of any gravitational acceleration. Comparing cruise and Saturn-orbit results shows that for Cassini, almost all the unmodelled acceleration was due to radiation forces, with only a small residual acceleration, much smaller than the Pioneer acceleration, and with opposite sign. 
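An order-of-magnitude estimate shows why a modest amount of anisotropically radiated heat is enough to produce an effect of the reported size. The figures below (roughly 2 kW of RTG heat, a few percent of it radiated preferentially in one direction, and a spacecraft mass of about 250 kg) are illustrative assumptions broadly consistent with the discussion above, not exact mission values.

```python
# Order-of-magnitude check of the thermal recoil explanation.
# All input figures are illustrative assumptions, not exact mission values.
c = 2.998e8            # speed of light, m/s

rtg_heat_w = 2000.0    # assumed RTG thermal output, watts
anisotropy = 0.03      # assumed fraction of that heat radiated preferentially forward
mass_kg = 250.0        # assumed spacecraft mass, kg

force_n = anisotropy * rtg_heat_w / c       # photon recoil force, F = P / c
acceleration = force_n / mass_kg            # m/s^2
print(f"Thermal recoil acceleration: {acceleration:.1e} m/s^2")   # ~8e-10 m/s^2

# Integrating the reported anomalous acceleration over ten years:
a_anomaly = 8.74e-10                        # m/s^2, the figure quoted in the introduction
ten_years_s = 10 * 365.25 * 24 * 3600       # seconds
delta_v = a_anomaly * ten_years_s           # m/s
print(f"Velocity change over ten years: {delta_v * 3.6:.2f} km/h")  # ~1 km/h
```

Both numbers land in the right range: a few tens of watts of directed heat gives an acceleration of order 10⁻¹⁰ m/s², and that acceleration integrates to roughly the 1 km/h velocity change over ten years quoted in the introduction.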
Potential issues with the thermal solution
There are two features of the anomaly, as originally reported, that are not addressed by the thermal solution: periodic variations in the anomaly, and the onset of the anomaly near the orbit of Saturn. First, the anomaly has an apparent annual periodicity and an apparent Earth sidereal daily periodicity with amplitudes that are formally greater than the error budget. However, the same paper also states this problem is most likely not related to the anomaly: "The annual and diurnal terms are very likely different manifestations of the same modeling problem. [...] Such a modeling problem arises when there are errors in any of the parameters of the spacecraft orientation with respect to the chosen reference frame." Second, the value of the anomaly measured over a period during and after the Pioneer 11 Saturn encounter had a relatively high uncertainty and a significantly lower value. The Turyshev, et al. 2012 paper compared the thermal analysis to the Pioneer 10 only. The Pioneer anomaly was unnoticed until after Pioneer 10 passed its Saturn encounter. However, the most recent analysis states: "Figure 2 is strongly suggestive that the previously reported "onset" of the Pioneer anomaly may in fact be a simple result of mis-modeling of the solar thermal contribution; this question may be resolved with further analysis of early trajectory data".
Previously proposed explanations
Before the thermal recoil explanation became accepted, other proposed explanations fell into two classes — "mundane causes" or "new physics". Mundane causes include conventional effects that were overlooked or mis-modeled in the initial analysis, such as measurement error, thrust from gas leakage, or uneven heat radiation. The "new physics" explanations proposed revision of our understanding of gravitational physics. If the Pioneer anomaly were a gravitational effect due to some long-range modification of the known laws of gravity, it nevertheless did not affect the orbital motions of the major natural bodies in the same way (in particular those moving in the regions in which the Pioneer anomaly manifested itself in its presently known form). Hence a gravitational explanation would need to violate the equivalence principle, which states that all objects are affected the same way by gravity. It was therefore argued that increasingly accurate measurements and modelling of the motions of the outer planets and their satellites undermined the possibility that the Pioneer anomaly is a phenomenon of gravitational origin. However, others believed that our knowledge of the motions of the outer planets and dwarf planet Pluto was still insufficient to disprove the gravitational nature of the Pioneer anomaly. The same authors ruled out the existence of a gravitational Pioneer-type extra-acceleration in the outskirts of the Solar System by using a sample of Trans-Neptunian objects. The magnitude of the Pioneer effect ((8.74±1.33)×10⁻¹⁰ m/s²) is numerically quite close to the product ((6.59±0.075)×10⁻¹⁰ m/s²) of the speed of light and the Hubble constant, hinting at a cosmological connection, but this is now believed to be of no particular significance. In fact the latest Jet Propulsion Laboratory review (2010) undertaken by Turyshev and Toth claims to rule out the cosmological connection by considering rather conventional sources whereas other scientists provided a disproof based on the physical implications of cosmological models themselves.
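The numerical coincidence with the product of the speed of light and the Hubble constant is easy to reproduce. The sketch below uses a round value of 70 km/s/Mpc for the Hubble constant purely for illustration; the article's quoted product of (6.59±0.075)×10⁻¹⁰ m/s² corresponds to a slightly smaller value of H0.

```python
# Reproduce the c * H0 coincidence noted above.
c = 2.998e8           # speed of light, m/s
M_PER_MPC = 3.086e22  # metres in one megaparsec

H0_km_s_per_mpc = 70.0                           # round illustrative value of the Hubble constant
H0_per_s = H0_km_s_per_mpc * 1000.0 / M_PER_MPC  # convert to 1/s

print(f"c * H0          = {c * H0_per_s:.2e} m/s^2")  # ~6.8e-10 m/s^2
print("Pioneer anomaly = 8.74e-10 m/s^2")
```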
Gravitationally bound objects such as the Solar System, or even the Milky Way, are not supposed to partake of the expansion of the universe—this is known both from conventional theory and by direct measurement. This does not necessarily interfere with paths new physics can take with drag effects from planetary secular accelerations of possible cosmological origin.
The deceleration model
It has been viewed as possible that a real deceleration is not accounted for in the current model for several reasons. It is possible that deceleration is caused by gravitational forces from unidentified sources such as the Kuiper belt or dark matter. However, this acceleration does not show up in the orbits of the outer planets, so any generic gravitational answer would need to violate the equivalence principle (see modified inertia below). Likewise, the anomaly does not appear in the orbits of Neptune's moons, challenging the possibility that the Pioneer anomaly may be an unconventional gravitational phenomenon based on range from the Sun.
Observational or recording errors
The possibility of observational errors, which include measurement and computational errors, has been advanced as a reason for interpreting the data as an anomaly. Hence, this would result in approximation and statistical errors. However, further analysis has determined that significant errors are not likely because seven independent analyses have shown the existence of the Pioneer anomaly as of March 2010. The effect is so small that it could be a statistical anomaly caused by differences in the way data were collected over the lifetime of the probes. Numerous changes were made over this period, including changes in the receiving instruments, reception sites, data recording systems and recording formats. Because the "Pioneer anomaly" does not show up as an effect on the planets, Anderson et al. speculated that this would be interesting if this was new physics. Later, with the Doppler shifted signal confirmed, the team again speculated that one explanation may lie with new physics, if not some unknown systemic explanation. Clock acceleration is an alternate explanation to the anomalous acceleration of the spacecraft towards the Sun. This theory takes notice of an expanding universe, which creates an increasing background 'gravitational potential'. The increased gravitational potential then accelerates cosmological time. It is proposed that this particular effect causes the observed deviation from predicted trajectories and velocities of Pioneer 10 and Pioneer 11. From their data, Anderson's team deduced a steady frequency drift of 1.5 Hz over 8 years. This could be mapped on to a clock acceleration theory, which means all clocks would be changing in relation to a constant acceleration. In other words, there would be a non-uniformity of time. Moreover, for such a distortion related to time, Anderson's team reviewed several models in which time distortion as a phenomenon is considered. They arrived at the "clock acceleration" model after completion of the review. Although the best model adds a quadratic term to the defined International Atomic Time, the team encountered problems with this theory. This then led to non-uniform time in relation to a constant acceleration as the most likely theory.[note 1]
Definition of gravity modified
The Modified Newtonian dynamics or MOND hypothesis proposes that the force of gravity deviates from the traditional Newtonian value to a very different force law at very low accelerations on the order of 10⁻¹⁰ m/s². Given the low accelerations placed on the spacecraft while in the outer Solar System, MOND may be in effect, modifying the normal gravitational equations. The Lunar Laser Ranging experiment combined with data from the LAGEOS satellites refutes that simple gravity modification is the cause of the Pioneer anomaly. The precession of the longitudes of perihelia of the solar planets or the trajectories of long-period comets have not been reported to experience an anomalous gravitational field toward the Sun of the magnitude capable of describing the Pioneer anomaly.
Definition of inertia modified
MOND can also be interpreted as a modification of inertia, perhaps due to an interaction with vacuum energy, and such a trajectory-dependent theory could account for the different accelerations apparently acting on the orbiting planets and the Pioneer craft on their escape trajectories. A model of inertia using Unruh radiation and a Hubble-scale Casimir effect, which, unlike MOND, has no adjustable parameters, has been proposed to explain the Pioneer anomaly and the flyby anomaly. A possible terrestrial test for evidence of a different model of modified inertia has also been proposed. Another theoretical explanation is based on a possible non-equivalence of the atomic time and the astronomical time, which can give the same observational fingerprint as the anomaly.
Celestial ephemerides in an expanding universe
A rather straightforward explanation of the Pioneer anomaly can be achieved if one takes into account that the background spacetime is described by the cosmological Friedmann–Lemaître–Robertson–Walker metric, which is not Minkowski flat. In this model of the spacetime manifold, light moves uniformly with respect to the conformal cosmological time, whereas physical measurements are performed with the help of atomic clocks that count the proper time of the observer, coinciding with the cosmic time. The difference between the conformal and cosmic times yields exactly the same numerical value and signature as the anomalous, blue Doppler shift effect that was measured in the Pioneer experiment. A small discrepancy between this theoretical prediction and the measured value of the Pioneer effect is clear evidence of the presence of the thermal recoil, which accounts for only 10–20 percent of the overall effect. If the origin of the Pioneer effect is cosmological, it gives direct access to measuring the numerical value of the Hubble constant independently of observations of the cosmic microwave background radiation or supernova explosions in distant galaxies (Supernova Cosmology Project).
Further research avenues
It is possible, but not proven, that this anomaly is linked to the flyby anomaly, which has been observed in other spacecraft. Although the circumstances are very different (planet flyby vs. deep space cruise), the overall effect is similar—a small but unexplained velocity change is observed on top of a much larger conventional gravitational acceleration. The Pioneer spacecraft are no longer providing new data (the last contact having been on 23 January 2003) and Galileo was deliberately burned up in Jupiter's atmosphere at the end of its mission. So far, attempts to use data from current missions such as Cassini have not yielded any conclusive results.
There are several remaining options for further research: - Further analysis of the retrieved Pioneer data. This includes not only the data that was first used to detect the anomaly, but additional data that until recently was saved only in older, inaccessible computer formats and media. This data was recovered in 2006, converted to more modern formats, and is now available for analysis. - The New Horizons spacecraft to Pluto is spin-stabilised for much of its cruise, and there is a possibility that it can be used to investigate the anomaly. New Horizons may have the same problem that precluded good data from the Cassini mission—its RTG is mounted close to the spacecraft body, so thermal radiation from it, bouncing off the spacecraft, may produce a systematic thrust of a not-easily predicted magnitude, several times as large as the Pioneer effect. Nevertheless, efforts are underway to study the non-gravimetric accelerations on the spacecraft, in the hopes of having them well modeled for the long cruise to Pluto after the Jupiter fly-by that occurred in February 2007. In particular, despite any large systematic bias from the RTG, the 'onset' of the anomaly at or near the orbit of Saturn might be observed. - A dedicated mission has also been proposed. Such a mission would probably need to surpass 200 AU from the Sun in a hyperbolic escape orbit. - Observations of asteroids around 20 AU may provide insights if the anomaly's cause is gravitational. Meetings and conferences about the anomaly The Pioneer Explorer Collaboration was formed to study the Pioneer Anomaly and has hosted three meetings (2005, 2007, and 2008) at International Space Science Institute in Bern, Switzerland, to discuss the anomaly, and discuss possible means for resolving the source. - non-uniform time in relation to a constant acceleration is a summarized term derived from the source or sources used for this sub-section. - Nieto, M. M.; Turyshev, S. G. (2004). "Finding the Origin of the Pioneer Anomaly". Classical and Quantum Gravity 21 (17): 4005–4024. arXiv:gr-qc/0308017. Bibcode:2004CQGra..21.4005N. doi:10.1088/0264-9381/21/17/001. - "Pioneer Anomaly Solved By 1970s Computer Graphics Technique". The Physics arXiv Blog. 31 March 2011. Retrieved 2015-05-05. - Rievers, B.; Lämmerzahl, C. (2011). "High precision thermal modeling of complex systems with application to the flyby and Pioneer anomaly". Annalen der Physik 523 (6): 439. arXiv:1104.3985. Bibcode:2011AnP...523..439R. doi:10.1002/andp.201100081. - Turyshev, S. G.; Toth, V. T.; Kinsella, G.; Lee, S.-C.; Lok, S. M.; Ellis, J. (2012). "Support for the Thermal Origin of the Pioneer Anomaly". Physical Review Letters 108 (24): 241101. arXiv:1204.2507. Bibcode:2012PhRvL.108x1101T. doi:10.1103/PhysRevLett.108.241101. PMID 23004253. - "Pioneer 10". Weebau Spaceflight Encyclopedia. 9 November 2010. Retrieved 2012-01-11. - Murphy, E. M. (1999). "A Prosaic explanation for the anomalous accelerations seen in distant spacecraft". Physical Review Letters 83 (9): 1890. arXiv:gr-qc/9810015. Bibcode:1999PhRvL..83.1890M. doi:10.1103/PhysRevLett.83.1890. - Katz, J. I. (1999). "Comment on "Indication, from Pioneer 10/11, Galileo, and Ulysses data, of an apparent anomalous, weak, long-range acceleration"". Physical Review Letters 83 (9): 1892–1892. arXiv:gr-qc/9809070. Bibcode:1999PhRvL..83.1892K. doi:10.1103/PhysRevLett.83.1892. - Scheffer, L. (2003). "Conventional forces can explain the anomalous acceleration of Pioneer 10". Physical Review D 67 (8): 084021. arXiv:gr-qc/0107092. 
Bibcode:2003PhRvD..67h4021S. doi:10.1103/PhysRevD.67.084021. - See pp. 10–15 in Turyshev, S. G; Toth, V. T.; Kellogg, L.; Lau, E.; Lee, K. (2006). "A study of the pioneer anomaly: new data and objectives for new investigation". International Journal of Modern Physics D 15 (01): 1–55. arXiv:gr-qc/0512121. Bibcode:2006IJMPD..15....1T. doi:10.1142/S0218271806008218. - Bertolami, O.; Francisco, F.; Gil, P. J. S.; Páramos, J. (2008). "Thermal analysis of the Pioneer anomaly: A method to estimate radiative momentum transfer". Physical Review D 78 (10): 103001. arXiv:0807.0041. Bibcode:2008PhRvD..78j3001B. doi:10.1103/PhysRevD.78.103001. - Toth, V. T.; Turyshev, S. G. (2009). "Thermal recoil force, telemetry, and the Pioneer anomaly". Physical Review D 79 (4): 043011. arXiv:0901.4597. Bibcode:2009PhRvD..79d3011T. doi:10.1103/PhysRevD.79.043011. - Turyshev, S. G.; Toth, V. T.; Ellis, J.; Markwardt, C. B. (2011). "Support for temporally varying behavior of the Pioneer anomaly from the extended Pioneer 10 and 11 Doppler data sets". Physical Review Letters 107 (8): 81103. arXiv:1107.2886. Bibcode:2011PhRvL.107h1103T. doi:10.1103/PhysRevLett.107.081103. - Bertolami, O.; Francisco, F.; Gil, P. J. S.; Páramos, J. (2012). "The Contribution of Thermal Effects to the Acceleration of the Deep-Space Pioneer Spacecraft". Physical Review Letters 107 (8): 081103. arXiv:1107.2886. Bibcode:2011PhRvL.107h1103T. doi:10.1103/PhysRevLett.107.081103. - Turyshev, S. G.; Toth, V. T. (2010). "The Pioneer Anomaly". Living Reviews in Relativity 13: 4. arXiv:1001.3686. Bibcode:2010LRR....13....4T. doi:10.12942/lrr-2010-4. - Turyshev, S. G.; Nieto, M. M.; Anderson, J. D. (2005). "A Route to Understanding of the Pioneer Anomaly". In Chen, P.; Bloom, E.; Madejski, G.; Petrosian, V. The XXII Texas Symposium on Relativistic Astrophysics. pp. 13–17. arXiv:gr-qc/0503021. Bibcode:2005tsra.conf..121T. Stanford e-Conf #C04, paper #0310.In particular, Appendix C. - Di Benedetto, M.; Iess, L.; Roth, D. C. (2009). "The non-gravitational accelerations of the Cassini spacecraft" (PDF). Proceedings of the 21st International Symposium on Space Flight Dynamics. International Symposium on Space Flight Dynamics. - Iess, L. (January 2011). "Deep-Space Navigation: a Tool to Investigate the Laws of Gravity" (PDF). Institut des Hautes Études Scientifiques. - Anderson, J. D.; et al. (2002). "Study of the anomalous acceleration of Pioneer 10 and 11". Physical Review D 65 (8): 082004. arXiv:gr-qc/0104064. Bibcode:2002PhRvD..65h2004A. doi:10.1103/PhysRevD.65.082004. - Nieto, M. M.; Anderson, J. D. (2005). "Using early data to illuminate the Pioneer anomaly". Classical and Quantum Gravity 22: 5343. arXiv:gr-qc/0507052. Bibcode:2005CQGra..22.5343N. doi:10.1088/0264-9381/22/24/008. - Tangen, K. (2007). "Could the Pioneer anomaly have a gravitational origin?". Physical Review D 76 (4): 042005. arXiv:gr-qc/0602089. Bibcode:2007PhRvD..76d2005T. doi:10.1103/PhysRevD.76.042005. - Iorio, L.; Giudice, G. (2006). "What do the orbital motions of the outer planets of the Solar System tell us about the Pioneer anomaly?". New Astronomy 11 (8): 600–607. arXiv:gr-qc/0601055. Bibcode:2006NewA...11..600I. doi:10.1016/j.newast.2006.04.001. - Iorio, L. (2007). "Can the Pioneer anomaly be of gravitational origin? A phenomenological answer". Foundations of Physics 37 (6): 897–918. arXiv:gr-qc/0610050. Bibcode:2007FoPh...37..897I. doi:10.1007/s10701-007-9132-x. - Iorio, L. (2007). "Jupiter, Saturn and the Pioneer anomaly: a planetary-based independent test". 
Journal of Gravitational Physics 1 (1): 5–8. arXiv:0712.1273. Bibcode:2007JGrPh...1....5I. - Standish, E. M. (2008). "Planetary and Lunar Ephemerides: testing alternate gravitational theories". AIP Conference Proceedings 977: 254–263. doi:10.1063/1.2902789. - Iorio, L. (2008). "The Lense–Thirring Effect and the Pioneer Anomaly: Solar System Tests". Proceedings of the Marcel Grossmann Meeting 11: 2558–2560. arXiv:gr-qc/0608105. doi:10.1142/9789812834300_0458. - Iorio, L. (2009). "Can the Pioneer Anomaly be Induced by Velocity-Dependant Forces? Tests in the Outer Regions of the Solar System with Planetary Dynamics". International Journal of Modern Physics D 18 (6): 947–958. arXiv:0806.3011. Bibcode:2009IJMPD..18..947I. doi:10.1142/S0218271809014856. - Fienga, A.; et al. (2009). Gravity tests with INPOP planetary ephemerides (PDF). Proceedings of the Annual Meeting of the French Society of Astronomy and Astrophysics. pp. 105–109. Bibcode:2009sf2a.conf..105F. Also published in Proceedings of the International Astronomical Union 5: 159–169. 2010. arXiv:0906.3962. Bibcode:2010IAUS..261..159F. doi:10.1017/S1743921309990330. - Iorio, L. (2010). "Does the Neptunian system of satellites challenge a gravitational origin for the Pioneer anomaly?". Monthly Notices of the Royal Astronomical Society 405 (4): 2615–2622. arXiv:0912.2947. Bibcode:2010MNRAS.405.2615I. doi:10.1111/j.1365-2966.2010.16637.x. - Pitjeva, E. V. (2010). EPM ephemerides and relativity. Proceedings of the International Astronomical Union 5. pp. 170–178. Bibcode:2010IAUS..261..170P. doi:10.1017/S1743921309990342. - Page, G. L.; Wallin, J. F.; Dixon, D. S. (2009). "How Well do We Know the Orbits of the Outer Planets?". The Astrophysical Journal 697 (2): 1226–1241. arXiv:0905.0030. Bibcode:2009ApJ...697.1226P. doi:10.1088/0004-637X/697/2/1226. - Page, G. L.; Dixon, D. S.; Wallin, J. F. (2006). "Can Minor Planets Be Used to Assess Gravity in the Outer Solar System?". The Astrophysical Journal 642 (1): 606–614. arXiv:astro-ph/0504367. Bibcode:2006ApJ...642..606P. doi:10.1086/500796. - Wallin, J. F.; Dixon, D. S.; Page, G. L. (2007). "Testing Gravity in the Outer Solar System: Results from Trans-Neptunian Objects". The Astrophysical Journal 666 (2): 1296–1302. arXiv:0705.3408. Bibcode:2007ApJ...666.1296W. doi:10.1086/520528. - Mizony, M.; Lachièze-Rey, M. (2005). "Cosmological effects in the local static frame". Astronomy and Astrophysics 434 (1): 45–52. arXiv:gr-qc/0412084. Bibcode:2005A&A...434...45M. doi:10.1051/0004-6361:20042195. - Lachièze-Rey, M. (2007). "Cosmology in the solar system: the Pioneer effect is not cosmological". Classical and Quantum Gravity 24 (10): 2735–2742. arXiv:gr-qc/0701021. Bibcode:2007CQGra..24.2735L. doi:10.1088/0264-9381/24/10/016. - Noerdlinger, P. D.; Petrosian, V. (1971). "The Effect of Cosmological Expansion on Self-Gravitating Ensembles of Particles". Astrophysical Journal 168: 1. Bibcode:1971ApJ...168....1N. doi:10.1086/151054. - Williams, J. G.; Turyshev, S. G.; Boggs, D. H. (2004). "Progress in Lunar Laser Ranging Tests of Relativistic Gravity" (PDF). Physical Review Letters 93 (26): 261101. arXiv:gr-qc/0411113. Bibcode:2004PhRvL..93z1101W. doi:10.1103/PhysRevLett.93.261101. - Turyshev, S. G. (28 March 2007). "Pioneer Anomaly Project Update: A Letter From the Project Director". The Planetary Society. Retrieved 2011-02-12. - Rañada, A. F. (2004). "The Pioneer anomaly as acceleration of the clocks". Foundations of Physics 34 (12): 1955. arXiv:gr-qc/0410084. Bibcode:2004FoPh...34.1955R. 
doi:10.1007/s10701-004-1629-y. - Bekenstein, J. D. (2006). "The modified Newtonian dynamics (MOND) and its implications for new physics". Contemporary Physics 47 (6): 387. arXiv:astro-ph/0701848. Bibcode:2006ConPh..47..387B. doi:10.1080/00107510701244055. - Exirifard, Q. (2010). "Constraints on f(RijklRijkl) gravity: Evidence against the co-variant resolution of the Pioneer anomaly". Classical and Quantum Gravity 26 (2): 025001. arXiv:0708.0662. Bibcode:2009CQGra..26b5001E. doi:10.1088/0264-9381/26/2/025001. - Nieto, M. M.; Turyshev, S. G.; Anderson, J. D. (2005). "Directly measured limit on the interplanetary matter density from Pioneer 10 and 11". Physics Letters B 613 (1–2): 11. arXiv:astro-ph/0501626. Bibcode:2005PhLB..613...11N. doi:10.1016/j.physletb.2005.03.035. - Milgrom, M. (1999). "The Modified Dynamics as a vacuum effect". Physics Letters A 253 (5–6): 273. arXiv:astro-ph/9805346. Bibcode:1999PhLA..253..273M. doi:10.1016/S0375-9601(99)00077-8. - McCulloch, M. E. (2007). "Modelling the Pioneer anomaly as modified inertia". Monthly Notices of the Royal Astronomical Society 376 (1): 338–342. arXiv:astro-ph/0612599. Bibcode:2007MNRAS.376..338M. doi:10.1111/j.1365-2966.2007.11433.x. - McCulloch, M. E. (2008). "Modelling the flyby anomalies using a modification of inertia". Monthly Notices of the Royal Astronomical Society Letters 389 (1): L57–60. arXiv:0806.4159. Bibcode:2008MNRAS.389L..57M. doi:10.1111/j.1745-3933.2008.00523.x. - Ignatiev, A. Yu. (2007). "Is violation of Newton's second law possible?". Physical Review Letters 98 (10): 101101. arXiv:gr-qc/0612159. Bibcode:2007PhRvL..98j1101I. doi:10.1103/PhysRevLett.98.101101. - Rañada, A. F.; Tiemblo, A. (2012). "Parametric invariance and the Pioneer anomaly". Canadian Journal of Physics 90: 931–937. arXiv:1106.4400. Bibcode:2012CaJPh..90..931R. doi:10.1139/p2012-086. Antonio Fernández-Rañada and Alfredo Tiemblo-Ramos propose "an explanation of the Pioneer anomaly that is a refinement of a previous one and is fully compatible with the cartography of the solar system. It is based on the non-equivalence of the atomic time and the astronomical time that happens to have the same observational fingerprint as the anomaly." - Kopeikin, S. M. (2012). "Celestial Ephemerides in an Expanding Universe". Physical Review D 86: 064004. arXiv:1207.3873. Bibcode:2012PhRvD..86f4004K. doi:10.1103/PhysRevD.86.064004. - Choi, C. Q. (3 March 2008). "NASA Baffled by Unexplained Force Acting on Space Probes". Space.com. Retrieved 2011-02-12. - "The Pioneer Missions". NASA. 26 July 2003. Retrieved 2015-05-07. - "Data Saved!". Planetary Society. 1 June 2006. Archived from the original on 2012-04-18. - Nieto, M. M. (2008). "New Horizons and the Onset of the Pioneer Anomaly". Physics Letters B 659 (3): 483. arXiv:0710.5135. Bibcode:2008PhLB..659..483N. doi:10.1016/j.physletb.2007.11.067. - "Pioneer anomaly put to the test". Physics World. 1 September 2004. Retrieved 2009-05-17. - Clark, S. (10 May 2005). "Lost asteroid clue to Pioneer puzzle". New Scientist. Retrieved 2009-01-10. - "Conference on The Pioneer Anomaly - Observations, Attempts at Explanation, Further Exploration". Center of Applied Space Technology and Microgravity. Retrieved 2012-02-12. - "The Pioneer Explorer Collaboration: Investigation of the Pioneer Anomaly at ISSI". International Space Science Institute. 18 February 2008. Retrieved 2015-05-07. - Anderson, J D.; Laing, P. A.; Lau, E. L.; Liu, A. S.; Nieto, M. M.; Turyshev, S. G. (1998). 
"Indication, from Pioneer 10/11, Galileo, and Ulysses Data, of an Apparent Anomalous, Weak, Long-Range Acceleration". Physical Review Letters 81 (14): 2858–2861. arXiv:gr-qc/9808081. Bibcode:1998PhRvL..81.2858A. doi:10.1103/PhysRevLett.81.2858. - The original paper describing the anomaly - Anderson, J D.; Laing, P. A.; Lau, E. L.; Liu, A. S.; Nieto, M. M.; Turyshev, S. G. (2002). "Study of the anomalous acceleration of Pioneer 10 and 11". Physical Review D 65 (8): 082004. arXiv:gr-qc/0104064. Bibcode:2002PhRvD..65h2004A. doi:10.1103/PhysRevD.65.082004. - A lengthy survey of several years of debate by the authors of the original 1998 paper documenting the anomaly. The authors conclude, "Until more is known, we must admit that the most likely cause of this effect is an unknown systematic. (We ourselves are divided as to whether 'gas leaks' or 'heat' is this 'most likely cause.')" The ISSI meeting above has an excellent reference list divided into sections such as primary references, attempts at explanation, proposals for new physics, possible new missions, popular press, and so on. A sampling of these are shown here: - Musser, George (December 1998). "Pioneering Gas Leak?". Scientific American 279 (6): 26–27. Bibcode:1998SciAm.279f..26M. doi:10.1038/scientificamerican1298-26b. - Reardon, A. C. (2011). "Gravitational Analysis of V541 Cygni, DI Herculis, and the Pioneer anomaly". Astrophysics and Space Science, Vol. 336, No. 2 369–377, Theory establishes a gravitational connection between the unexplained periastron advance observed in two binary star systems and the Pioneer anomaly. - Anderson, J. D.; Turyshev, S. G.; Nieto, M. M. (2002). "A mission to test the Pioneer anomaly". International Journal of Modern Physics D 11 (10): 1545. arXiv:gr-qc/0205059. Bibcode:2002IJMPD..11.1545A. doi:10.1142/S0218271802002876. - Dittus H, et al. (2005). "A Mission to Explore the Pioneer Anomaly". ESA Spec.Publ. 588: 3–10. arXiv:gr-qc/0506139. Bibcode:2005gr.qc.....6139T. - Nieto, M. M.; Turyshev, S.G. (2004). "Finding the origin of the Pioneer anomaly". Classical and Quantum Gravity 21 (17): 4005. Bibcode:2004CQGra..21.4005N. doi:10.1088/0264-9381/21/17/001. - Further elaboration on a dedicated mission plan (restricted access) - Page, J. F.; Dixon, David S.; Wallin, John F. (2005). "Can Minor Planets be Used to Assess Gravity in the Outer Solar System?". The Astrophysical Journal 642: 606. arXiv:astro-ph/0504367. Bibcode:2006ApJ...642..606P. doi:10.1086/500796. - Johnson, J. (January 2, 2005). "Opening New Doors in Space". Seattle Times. Retrieved 2012-01-12. - Nieto, M. M.; Anderson, J. D. (2005). "Using Early Data to Illuminate the Pioneer Anomaly". Classical and Quantum Gravity 22 (24): 5343–5354. arXiv:gr-qc/0507052. Bibcode:2005CQGra..22.5343N. doi:10.1088/0264-9381/22/24/008. - Hellemans, Alexander (October 2005). "A Force to Reckon With". Scientific American 293 (4): 24–25. Bibcode:2005SciAm.293d..24H. doi:10.1038/scientificamerican1005-24. - Brownstein, J. R.; Moffat, J. W. (2006). "Gravitational solution to the Pioneer 10/11 anomaly". Classical and Quantum Gravity 23 (10): 3427–3436. arXiv:gr-qc/0511026. Bibcode:2006CQGra..23.3427B. doi:10.1088/0264-9381/23/10/013. - Anderson, J. (January 28, 2009). "March 2009: Is there something we don't know about gravity?". Astronomy Magazine 37 (3): 22–27. - "Pioneer Anomaly Solved By 1970s Computer Graphics Technique" (March 2011) - "Pioneering Gas Leak? 
The strange motions of two space probes have mundane explanations--probably" Scientific American (December 1998) - "A Force to Reckon With: What applied the brakes on Pioneer 10 and 11?" Scientific American (October 2005) - "Gravity theory dispenses with dark matter" - STVG (Scalar-tensor-vector gravity) theory claims to predict Pioneer anomaly - Planetary Society data recovery effort enables study - Shows number of publications about the Pioneer anomaly on arXiv.org, by year. - Space.com: The Problem with Gravity: New Mission Would Probe Strange Puzzle - "Wanted - Einstein Jr". the Economist. March 2008. - The Pioneer Anomaly, a 30-Year-Old Cosmic Mystery, May Be Resolved At Last - Popular Science (December 2010)[dead link] - Robbins, Stuart (May 2014). "Exposing PseudoAstronomy, Episode 110: Solar System Mysteries "Solved" by PseudoScience, Part 2 - The Pioneer Anomaly". Exposing PseudoAstronomy Podcast.
- 1 What is the meaning of probability in math? - 2 How do you explain probability? - 3 What is the formula of probability? - 4 What is probability simple words? - 5 How do you explain probability to students? - 6 What are some real life examples of probability? - 7 What is the best definition of probability? - 8 How do you find probability example? - 9 What are the 5 rules of probability? - 10 How do you calculate the probability of winning? - 11 How do I calculate mean? - 12 Why do we learn probability? - 13 What are the basic concepts of probability? - 14 What is another word for probability? What is the meaning of probability in math? Probability means possibility. It is a branch of mathematics that deals with the occurrence of a random event. The value is expressed from zero to one. Probability has been introduced in Maths to predict how likely events are to happen. How do you explain probability? Probability is the ratio of the times an event is likely to occur divided by the total possible events. In the case of our die, there are six possible events, and there is one likely event for each number with each roll, or 1/6. What is the formula of probability? P(A) is the probability of an event “A” n(A) is the number of favourable outcomes. n(S) is the total number of events in the sample space. Basic Probability Formulas. |All Probability Formulas List in Maths| |Conditional Probability||P(A | B) = P(A∩B) / P(B)| |Bayes Formula||P(A | B) = P(B | A) ⋅ P(A) / P(B)| What is probability simple words? Probability is simply how likely something is to happen. Whenever we’re unsure about the outcome of an event, we can talk about the probabilities of certain outcomes—how likely they are. The analysis of events governed by probability is called statistics. How do you explain probability to students? The probability of an event is the likelihood that the event will happen. If an event is sure to happen, then it has a certain probability, If an event is more likely to happen than not happen, then it has a likely probability. If the likelihood of two events happening is the same, then they have equal probability. What are some real life examples of probability? 8 Real Life Examples Of Probability - Weather Forecasting. Before planning for an outing or a picnic, we always check the weather forecast. - Batting Average in Cricket. - Flipping a coin or Dice. - Are we likely to die in an accident? - Lottery Tickets. - Playing Cards. What is the best definition of probability? 1: the quality or state of being probable. 2: something (such as an event or circumstance) that is probable. 3a(1): the ratio of the number of outcomes in an exhaustive set of equally likely outcomes that produce a given event to the total number of possible outcomes. How do you find probability example? Probability is the likelihood or chance of an event occurring. For example, the probability of flipping a coin and it being heads is ½, because there is 1 way of getting a head and the total number of possible outcomes is 2 (a head or tail). We write P(heads) = ½. What are the 5 rules of probability? Basic Probability Rules - Probability Rule One (For any event A, 0 ≤ P(A) ≤ 1) - Probability Rule Two (The sum of the probabilities of all possible outcomes is 1) - Probability Rule Three (The Complement Rule) - Probabilities Involving Multiple Events. - Probability Rule Four ( Addition Rule for Disjoint Events) - Finding P(A and B) using Logic. How do you calculate the probability of winning? 
To convert odds to probability, take the player's chances of winning, use that as the numerator, and divide by the total number of chances, both winning and losing. For example, if the odds are 4 to 1, the probability equals 1 / (1 + 4) = 1/5, or 20%. How do I calculate mean? The mean is the average of the numbers. It is easy to calculate: add up all the numbers, then divide by how many numbers there are. In other words, it is the sum divided by the count. Why do we learn probability? Probability is an essential tool in applied mathematics and mathematical modeling. It is vital to have an understanding of the nature of chance and variation in life in order to be a well-informed (or “efficient”) citizen. One area in which this is extremely important is in understanding risk and relative risk. What are the basic concepts of probability? A probability is a number that reflects the chance or likelihood that a particular event will occur. Probabilities can be expressed as proportions that range from 0 to 1, and they can also be expressed as percentages ranging from 0% to 100%. What is another word for probability? Common synonyms include likelihood, chance, and odds.
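The formulas quoted in this Q&A can be collected into a short script. This is a minimal sketch: the function names are ours, and the fractions module is used only to keep the answers exact.

```python
from fractions import Fraction

# A minimal sketch of the formulas quoted above; the function names are ours.

def classical_probability(favourable, total):
    """P(A) = n(A) / n(S): favourable outcomes over the whole sample space."""
    return Fraction(favourable, total)

def bayes(p_b_given_a, p_a, p_b):
    """Bayes' formula: P(A | B) = P(B | A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

def odds_to_probability(chances_for, chances_against):
    """Odds quoted as 'against to for' (e.g. 4 to 1) -> probability of winning."""
    return Fraction(chances_for, chances_for + chances_against)

def mean(values):
    """The mean: the sum of the numbers divided by how many there are."""
    return sum(values) / len(values)

print(classical_probability(1, 6))    # one face of a fair die -> 1/6
print(classical_probability(1, 2))    # heads on a fair coin   -> 1/2
print(odds_to_probability(1, 4))      # odds of 4 to 1         -> 1/5, i.e. 20%
print(mean([2, 4, 6, 8]))             # -> 5.0
print(bayes(Fraction(1, 2), Fraction(1, 3), Fraction(1, 4)))  # -> 2/3
```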
After the Great Depression, economists realized that in order to avoid a similar economic downturn from happening again, they needed a better way to keep track of the U.S. economy. It is usual for our economy to go through fluctuations of growth and contraction in the short run, but economists started asking themselves the following questions: The answer was to calculate the gross domestic product or GDP. The definition of gross domestic product is the sum of all final goods and services sold or produced within a nation's domestic borders. It is a measurement of a country's economic activity in a certain amount of time, either quarter to quarter, or year to year. So, if the figure goes up from one year to the next, then we can feel confident that the economy is more productive than the year before. However, if GDP falls from one year to the next, it is an indication that the economy is slowing. There are two approaches to calculating GDP: the income approach, which is also known as the resource cost approach, and the expenditure approach. Now, both approaches are essentially two sides of the same coin, meaning they are both going to arrive at the same outcome, but by using different methods. This approach calculates economic activity by adding up the costs that go into producing goods and services. You may recall that our resources or inputs are land, labor, and capital. Therefore, we want to look at the cost of these or the income received from them. Think of this income using the acronym WRIP: Now, this process may sound rather complicated, but we do have data on the income that people make from tax returns, which is readily available to our government. Keep in mind, though, that not all income that people make equates to income for businesses. EXAMPLEFor example, equipment depreciates or loses value from year to year, so we need to make adjustments for that type of thing. We would include income that foreigners make here in our country because if they are working here, they are generating economic activity here. On the other hand, income that Americans make abroad would be excluded because that income is contributing to somebody else's gross domestic product, not ours. Here is a reminder of the circular flow model. Remember, the input market is where are factors of production--land, labor, and capital--are exchanged. The bottom part highlighted in yellow represents the income or resource cost approach in the circular flow model. Again, this approach calculates economic activity by adding up what people spend money to purchase. Referencing our expenditure approach formula again (C + I + G + (X - M), let's explore each component in a bit more detail: This approach reflects the top of the circular flow model here. As mentioned, we are subtracting what we are importing, or paying the rest of the world for, and adding our exports, because exports are items that we are producing here. So, why is it that the expenditure and income approaches arrive at the same GDP? Well, it is because one person's spending becomes another person's income, which you can see on the Circular Flow Model. This is why the model is circular in nature: the money that people are spending in the top, they get by working in the bottom, or in the input market. The whole point of calculating GDP is to show GDP growth, or how we measure the change in GDP over time. GDP is the most common way of measuring growth. If we are growing by GDP, that is an indication that our macro, or overall, economy is healthy. 
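Here is a minimal sketch of the expenditure approach described above, GDP = C + I + G + (X - M); every dollar figure is invented purely for illustration.

```python
# Minimal sketch of the expenditure approach, GDP = C + I + G + (X - M).
# Every dollar figure below is invented purely for illustration.

def gdp_expenditure(consumption, investment, government, exports, imports):
    """Sum final spending: households, firms, government, and net exports."""
    return consumption + investment + government + (exports - imports)

gdp_this_year = gdp_expenditure(
    consumption=14.0,   # C: household spending (hypothetical trillions)
    investment=3.5,     # I: business investment
    government=3.8,     # G: government purchases
    exports=2.5,        # X
    imports=3.1,        # M
)
gdp_last_year = 20.0

print(f"GDP = {gdp_this_year:.1f} trillion")                    # 20.7
growth = 100 * (gdp_this_year - gdp_last_year) / gdp_last_year
print(f"Growth since last year: {growth:.1f}%")                 # 3.5%
```

The income (resource cost) approach would tally WRIP income instead and, as the circular flow model suggests, both tallies should land on the same number.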
In the past, we have used a measure called gross national product (GNP), the only difference being that it calculates the value of goods and services produced by Americans instead of in America. Today, however, we mostly use GDP, being more concerned with what is being produced within our nation's borders. So, could anything ever really be a perfect measure of all economic activity or how people live in a country? Not really, no. GDP cannot capture any non-market activities, like cleaning our own homes, caring for our children, or changing the oil on our car. It is also not going to measure the value that we place on things, like leisure or safety. Crime and pollution cannot be accounted for, either. Also, GDP is an average. Certainly, if GDP per capita (person) rises from one year to the next, we can say overall that our standard of living has improved. However, that certainly does not mean that everyone is better off. Now, it is important to note, especially in the expenditure approach, that we only measure final goods and services, not intermediate ones. An intermediate good is something that is purchased in the production process to help us make a final good or service. Do we count those tires in our GDP? The answer is no because those tires count in the final purchase price of the vehicle. If we counted them when the manufacturer purchased them to put on the car and when you bought the car, that would be double counting. The tires are already counted in the final selling price of the vehicle, and we want only to count things that are being produced new. However, if you need new tires for your car this winter, then the tires you purchase to replace your old ones will count in this year's GDP. Those new tires are being produced this year, and are being purchased as a new good or service. Hopefully, this helps you to understand the difference between an intermediate good--something used in the production process to make a final good--and a final good. Source: Adapted from Sophia instructor Kate Eskra.
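To see why only final goods are counted, here is a small sketch of a hypothetical production chain (all prices invented): summing each stage's value added gives the same total as the final selling price, while summing every transaction double-counts the tires.

```python
# Hypothetical production chain for one car (all prices invented).  Counting
# every transaction would count the tires twice; counting only value added,
# or only the final sale, does not.
stages = [
    {"name": "tire maker",    "buys": 0,      "sells": 400},
    {"name": "car assembler", "buys": 400,    "sells": 25_000},
    {"name": "dealer",        "buys": 25_000, "sells": 27_000},
]

every_transaction = sum(s["sells"] for s in stages)              # 52,400 (double counts)
value_added       = sum(s["sells"] - s["buys"] for s in stages)  # 27,000
final_sale        = stages[-1]["sells"]                          # 27,000

print(every_transaction, value_added, final_sale)
```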
Chapter 1 - Objectives and Summary for the OUP text Senior Physics - Concepts in Context by Walding, Rapkins and
Knowledge & Understanding
- List the SI standard quantities together with their symbols, units and abbreviations.
- Distinguish between a basic quantity and a derived quantity.
- Convert from one unit to another.
- List and classify the possible sources of errors encountered when making a measurement.
- Find systematic and random errors. Calculate the error in an experiment.
- Convert from exponential to decimal notation and vice versa.
- Arrange a set of numbers in order of magnitude.
- Use significant figures in calculations.
- State simple error combination rules.
- Read linear, vernier and micrometer scales.
- Estimate length, time, mass and number.
- Determine the error in the value of pi by experiment.
- Collect and analyse primary data by experiment.
- Locate and comprehend relevant information from secondary data sources.
CHAPTER 1 SUMMARY
- Early measurements were based on body or heavenly features and differed from country to country.
- There is an international system of units, called SI, which is the system most commonly used around the world and by scientists.
- Measurable features or properties of objects are often called physical quantities. All physical quantities should be quoted with their numerical value and their unit.
- Fundamental quantities (or base quantities) are those which are used to define all other quantities (derived quantities).
- All measurements include errors or uncertainties, either systematic or random.
- Powers of 10 are called exponential notation. Scientific notation writes a number in the form M × 10^n, where M is a number having a single non-zero digit to the left of the decimal point and n is a positive or negative exponent.
- The order of magnitude is the power of 10 closest to the number.
- Significant figures are those digits in a number that are known with certainty plus the first digit that is uncertain.
- Instruments used in measuring length include the ruler, the micrometer and the vernier calliper. Rotational speeds can be measured with a xenon stroboscope.
- Digital measurements in on/off form can be taken with simple counters or computers.
- Ideal measuring devices have no effect on the measurement itself.
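A few of the summary points above (scientific notation, order of magnitude, and simple error combination) can be illustrated in a short script. The error-combination rules coded here are the usual introductory ones, assumed rather than quoted from the chapter: add absolute uncertainties when adding or subtracting, and add fractional uncertainties when multiplying or dividing.

```python
import math

def scientific_notation(x):
    """Write x as M x 10^n with one non-zero digit left of the decimal point."""
    n = math.floor(math.log10(abs(x)))
    return f"{x / 10 ** n:.3g} x 10^{n}"

def order_of_magnitude(x):
    """The power of 10 closest to x: round log10(x) to the nearest integer."""
    return round(math.log10(abs(x)))

def add_with_error(a, da, b, db):
    """Adding or subtracting quantities: add the absolute uncertainties."""
    return a + b, da + db

def multiply_with_error(a, da, b, db):
    """Multiplying or dividing: add the fractional (percentage) uncertainties."""
    value = a * b
    return value, value * (da / a + db / b)

print(scientific_notation(299_792_458))            # 3 x 10^8
print(order_of_magnitude(701))                     # 3
print(add_with_error(52.0, 0.5, 31.0, 0.5))        # (83.0, 1.0)
print(multiply_with_error(2.0, 0.1, 10.0, 0.5))    # (20.0, 2.0)
```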
This lesson allows students to explore the concept of multiplication hands on. Kids will also come to understand why multiplication is important as well as how to solve multiplication problems using array models. Hop into multiplication! Introduce your class to times tables with this engaging math lesson. Students will read about the correlation between repeated addition and multiplication, and then create hands-on book pages to illustrate it. Lay the foundation for multiplication by introducing your second graders to the concepts of skip counting and repeated addition. This lesson can be used alongside Up, Up, and Array, or separately to reinforce these important skills.
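A tiny illustration of the two models these lessons build on, repeated addition and the array model, using 3 × 4 as the example:

```python
# 3 x 4 seen two ways: as repeated addition and as an array model.
rows, cols = 3, 4

repeated_addition = sum(cols for _ in range(rows))   # 4 + 4 + 4
print(repeated_addition, rows * cols)                # 12 12 -- same answer

for _ in range(rows):        # a 3-by-4 array: 3 rows of 4 objects each
    print("* " * cols)
```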
System of Measurement
For this geometry worksheet, students measure using different units. They use multiplication and division to convert between units, depending on whether they are going from large to small or the other way. There are 24 problems.
Introduction to Systems of Linear Equations
Here is a lesson that really delivers! Middle schoolers collaborate to consider pizza prices from four different pizza parlors. Using systems of simultaneous equations, they graph each scenario to determine the best value. Developed for... 7th - 9th Math CCSS: Designed
Mayan Mathematics and Architecture
Take young scholars on a trip through history with this unit on the mathematics and architecture of the Mayan civilization. Starting with an introduction to their base twenty number system and the symbols they used, this eight-lesson unit... 4th - 8th Math CCSS: Adaptable
Round and Round We Go — Exploring Orbits in the Solar System
Math and science come together in this cross-curricular astronomy lesson plan on planetary motion. Starting off with a hands-on activity that engages the class in exploring the geometry of circles and ellipses, this lesson plan then... 5th - 8th Math CCSS: Adaptable
Relationships Between Formal Measurement Units: Measure and Record Mass in Kilograms and Grams
Teach the masses about the metric system with this hands-on measurement lesson. Given a fruit or vegetable, learners estimate, measure, and convert its mass using the metric units gram and kilogram. 3rd - 8th Math CCSS: Adaptable
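The rule behind the System of Measurement worksheet above (multiply when converting a larger unit to a smaller one, divide the other way) looks like this in a short sketch; the conversion factors are the standard metric ones.

```python
# Metric length conversions: multiply when going from a larger unit to a
# smaller one, divide when going the other way.
M_PER_KM = 1000      # 1 kilometre = 1000 metres
CM_PER_M = 100       # 1 metre = 100 centimetres

def km_to_m(km):
    return km * M_PER_KM     # larger unit -> smaller unit: multiply

def cm_to_m(cm):
    return cm / CM_PER_M     # smaller unit -> larger unit: divide

print(km_to_m(2.5))    # 2500.0 metres
print(cm_to_m(340))    # 3.4 metres
```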
We all know that a line segment, or a line, is straight, right? What if somebody told you that you could make curves entirely out of straight lines? With line design (also known as "string art" and "curve stitching") you can arrange a series of straight lines in a systematic way so that they create the appearance of a smooth curve, forming what is called an "envelope" in mathematics. These curves are based on mathematical formulas and can result in many complex and intriguing curves. Don't worry, though, it's much easier than it looks...
1. Make an angle. It can be any angle you want, but to follow along with this example, use a 30 to 150 degree angle.
2. Divide one side of the angle into equal parts. Mark each division so you can tell where they are.
3. Repeat on the other side. The segments should be evenly spaced, but the spacing doesn't have to be the same as on the other side of the angle. For example, one line could have the segments separated at 1 centimeter (0.4 in) intervals, while the other side has the same number of segments but separated at 2 centimeter (0.8 in) intervals. Here they are shown at equal spacing, but experimenting with the spacing can create different curves.
4. On one side of the marked angle, starting from the vertex, or corner, number each division. In this case, we are counting from 1 through 10.
5. Starting at the other side, from the vertex/corner, mark the segments 10 through 1.
6. With a ruler, connect the number ones together (the one at the top of the line to the one near the corner).
7. Repeat for all of the numbers (2 to 2, 3 to 3, 4 to 4, and so on).
8. Make additional angles next to each other to make complex shapes.
- Line design can be observed in nature, especially in spider webs. The strands are pulled straight but woven together in such a way that they approximate a curve to the human eye.
- If you would like to arrange the angles in a circle, this is a way to make sure the angles are all the same.
- Find out how many angles you would like to use. We will use 5 angles.
- Since a circle has 360 degrees in it, we will use 360 as the dividend and 5 as the divisor.
- Divide 360 by 5.
- The quotient is how many degrees each angle should be. Here the quotient is 72, so each angle should be 72 degrees to form a circle.
- When you are experienced, you will not need numbers anymore.
- The more even the divisions, the more centered and crisp the line design will be. If you have trouble with consistency, make the curves on graph paper.
- Use a straight edge or else it will look sloppy.
- If you mess up, start all over, because if you don't, one line will be out of proportion.
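The numbering scheme in steps 4-7 amounts to joining point i on one side to point n + 1 - i on the other. Below is a small sketch of that construction; it assumes matplotlib is available for the drawing, and the 60-degree angle and ten divisions are arbitrary choices.

```python
import math
import matplotlib.pyplot as plt

# Steps 4-7 above in code: divide each side of the angle into n equal parts
# and join point i on one side to point (n + 1 - i) on the other.
n = 10
angle = math.radians(60)
ray1 = (1.0, 0.0)                           # unit vector along one side
ray2 = (math.cos(angle), math.sin(angle))   # unit vector along the other side

for i in range(1, n + 1):
    t1 = i / n               # point number i, counted out from the vertex, on side 1
    t2 = (n + 1 - i) / n     # its partner on side 2
    plt.plot([t1 * ray1[0], t2 * ray2[0]],
             [t1 * ray1[1], t2 * ray2[1]],
             color="black", linewidth=0.8)

plt.axis("equal")
plt.show()    # the straight chords outline a smooth curve (the envelope)
```

The envelope traced by these chords is an arc of a parabola.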
Back To CourseMath 104: Calculus 13 chapters | 105 lessons Erin has taught math and science from grade school up to the post-graduate level. She holds a Ph.D. in Chemical Engineering. I love riding on roller coasters. One of my favorite roller coasters of all time is Space Mountain. On this roller coaster you're kept in the dark the entire time. So picture it: you're going along in the dark, and suddenly you're jerked to one side, then jerked to the other as you careen around this Space Mountain. It's fantastic! But as I'm being jerked around, I always think about one thing: Why do we use the term 'jerk'? What in the world does that mean? Well, it goes back to derivatives and the rate of change. If you have some function like y=f(t), the position, y, is a function of time, t. Say this is my height on the roller coaster. Then I can look at y`(dy/dt), which is the rate of change, d/dt, of my position, y. So this rate of change is my velocity, it's how fast my height is changing as a function of time. I could take the derivative of that, y``, or ((d^2)y)/dt^2, as the derivative of the rate of change of position, so it's the derivative of the velocity. And the derivative of the velocity is the acceleration. Well, the acceleration can also change, so I can write y```, and that's the rate of change of the acceleration. And how fast my acceleration changes is known as the jerk. So you know how on a roller coaster you're completely stopped at first? There's no acceleration, no velocity, no nothing. Then all of a sudden you jerk, or lurch, forward. That's a change in your acceleration; that's d/dt of your acceleration. So the key here is that the derivative is just a rate of change. But in the real world, nothing is static. Everything is dynamic; everything changes. Static means stationary and unchanging, and dynamic means changing. You measure this change using derivatives. Let's do an example. Let's say we have position, f(t), as a function of time, t, and it equals sin(t) + t^3. I know that the velocity is the derivative of the position, so f`(t) is d/dt sin(t) + t^3. That's my position. I can find this derivative by first dividing and conquering, so d/dt sin(t) + d/dt(t^3). Well using my derivative rules here, d/dt sin(t)=cos(t), and d/dt(t^3) is 3t^2, so my velocity is cos(t) + 3t^2. My acceleration is the derivative of the velocity - it's how fast my velocity is changing - and that's f``, or d/dt f`(t). I can calculate this by finding the derivative, d/dt, of my velocity, which is cos(t) + 3t^2. Again I can divide and conquer to get d/dt cos(t) + 3(d/dt)t^2. Then using my derivative rules, I find that the acceleration is -sin(t) + 3(2t), or -sin(t) + 6t. Now that I know the acceleration, I can find the jerk, which is just the derivative or the rate of change of the acceleration. This is f```, or d/dt f``(t), so that's d/dt of the acceleration, the rate of change of the acceleration. I can calculate that by finding d/dt of the acceleration, which is -sin(t) + 6t. Divide and conquer; that equals -(d/dt)sin(t) + 6(d/dt)t. Again using my rules I know that this equals -cos(t) + 6. So in this case, where my position was originally sin(t) + t^3, the jerk as a function of time - that's how fast my acceleration is changing as a function of time - is equal to -cos(t) + 6. We can use these same principles to find any higher-order derivative. So, for example, we can find the fourth-order derivative of f(x) = x^(-1) + cos(4x). This fourth-order derivative is f````. 
Mathematicians kind of get lazy after the first three, so we write f^4. Let's find the fourth-order derivative of this function, f(x) = x^(-1) + cos(4x). f`(x) is the derivative of this function, so I'm going to divide and conquer and find that f`(x) = -x^(-2) - 4(sin(4x)), because here I have to use the chain rule. Once I have the first derivative, f`(x), I can find the second derivative, f``(x), and that's the derivative, d/dx, of my first derivative, or d/dx(-x^(-2) - 4(sin(4x))). I can divide and conquer and use my differentiation rules to find that this second-order derivative is 2x^(-3) - 16cos(4x). So now we can keep going. f```(x) is the derivative of f``(x), so that's d/dx(2x^(-3) - 16cos(4x)). Calculating this out using our divide and conquer/differentiation rules, we find that this derivative, this f```(x), equals -6x^(-4) + 64sin(4x). I've got the third-order derivative, but I still need the fourth-order derivative, so let's take one more differentiation. f^4(x) is the derivative d/dx(f```(x)), or d/dx(-6x^(-4) + 64sin(4x)). I can divide this and conquer, so take the derivative of -6x^(-4), and I can add that to the derivative of 64sin(4x). I have to use the chain rule, and I find that my fourth-order derivative is 24x^(-5) + 256cos(4x). Let's review. The jerk is kind of like what you have on Space Mountain; it's the derivative of the acceleration or the rate of change of the acceleration, how your acceleration is changing as a function of time. If you graph this, it's the slope of the tangent of your acceleration as a function of time. It is also the third derivative, f```(t), of your position, f(t). In calculating the jerk, we also learn some important things about higher-order derivatives. To find higher-order derivatives - the second derivative, the third derivative, the fourth derivative, etc. - just keep differentiating. You differentiate f to get f`, you differentiate f` to get f``, you differentiate that to get f```, and so on and so forth. You can calculate as high as you want, even say up to the 47th-order derivative. Just keep differentiating.
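Both worked examples can be checked symbolically. Assuming SymPy is available, the short script below reproduces the velocity, acceleration and jerk for sin(t) + t^3 and gives the fourth derivative of x^(-1) + cos(4x) as 24x^(-5) + 256cos(4x), matching the corrected value above.

```python
import sympy as sp

t, x = sp.symbols("t x")

# Example 1: position sin(t) + t**3 -> velocity, acceleration, jerk.
position = sp.sin(t) + t ** 3
for order in (1, 2, 3):
    print(order, sp.diff(position, t, order))
# order 1: 3*t**2 + cos(t);  order 2: 6*t - sin(t);  order 3: 6 - cos(t)

# Example 2: fourth-order derivative of x**(-1) + cos(4*x).
f = 1 / x + sp.cos(4 * x)
print(sp.diff(f, x, 4))      # 24/x**5 + 256*cos(4*x)
```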
Salmonella is a type of bacteria that can cause severe diarrhea in humans. On entering the digestive tract, these bacteria will not proliferate unchallenged: The immune system attacks the intruders – with peptides, for instance. These peptides are small proteins, which tear holes in the envelope of the bacteria. The salmonella react immediately to such envelope damage: Among other things, they produce a small RNA molecule (RybB-sRNA), which promptly prevents the synthesis of about ten proteins in the bacterial cell. All of the proteins in question fulfill biological functions on the envelope of the bacteria. A reasonable mechanism: "In this way, the salmonella bacteria quickly help themselves. Since the outer membrane is full of holes, the proteins would not be able to persist there and fulfill their function," explains Kai Papenfort of the Institute for Molecular Infection Biology at the University of Würzburg. Thus, the small RNA molecule avoids a waste of protein resources. RNA start region binds precursors for proteins But how does the small RNA manage to regulate the production of multiple proteins all at the same time? An answer to this question is given by the Würzburg researchers in the current issue of the scientific journal PNAS: "The start region of the sRNA molecule binds the transcripts, which are a kind of precursor for all these proteins," says Professor Jörg Vogel, the head of the institute. "As soon as this happens, the protein production stops." To prove this, the researchers transferred this start region to other RNA molecules. As a result, the modified molecules also brought the production of the ten proteins to a halt. Without change in the evolution of the bacteria With this research, the Würzburg scientists have shown for the first time: Even small RNA molecules possess clearly defined regions to which a regulatory function can be attributed. Previously, this was known to be true only for proteins, but not for "simpler" molecules such as RNA. "RNA also consists of functional units, which can be newly arranged on the basis of a modular design principle," explains Professor Vogel. Furthermore, the regulatory region represents an RNA section, which has not changed in the evolution of the bacteria. This means: "This RNA is present not only in salmonella, but also in many other pathogenic bacteria and it always has the same function," explains Kai Papenfort. A molecular structure, which has not undergone any evolutionary change – this suggests that it must be essential. It may be a factor, which is indispensable to the bacteria for the infection process and could play a role in triggering the disease. To clarify whether this is the case is the next objective of the Würzburg researchers. Ultimately, the start region of RybB-sRNA might even become a starting point for new drugs. Basic research on small RNA The team of Professor Jörg Vogel conducts basic research on small RNA molecules, the chains of which consist of about 100 components (small RNA, short: sRNA). This particular type of RNA regulates life processes in bacteria and more highly developed cells. Besides salmonella, the Würzburger scientists also used helicobacter as a model organism – a bacterium, which can cause stomach cancer. "Evidence for an autonomous 5‘ target recognition domain in an Hfq-associated small RNA", Kai Papenfort, Marie Bouvier, Franziska Mika, Cynthia M. Sharma, and Jörg Vogel; PNAS, published online on 8 November 2010, doi 10.1073/pnas.1009784107 Dr. 
Kai Papenfort, Institute for Molecular Infection Biology at the University of Würzburg, T +49 (0)931 31-81230, firstname.lastname@example.org
The Pythagoreans dealt with the regular solids, but the pyramid, prism, cone and cylinder were not studied until the Platonists. Eudoxus established their measurement, proving the pyramid and cone to have one-third the volume of a prism and cylinder on the same base and of the same height. He was probably also the discoverer of a proof that the volume of a sphere is proportional to the cube of its radius.
Basic topics in solid geometry and stereometry include
Advanced topics include
- projective geometry of three dimensions (leading to a proof of Desargues' theorem by using an extra dimension)
- further polyhedra
- descriptive geometry.
Various techniques and tools are used in solid geometry. Among them, analytic geometry and vector techniques have a major impact by allowing the systematic use of linear equations and matrix algebra, which are important for higher dimensions.
A major application of solid geometry and stereometry is in computer graphics.
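Written out as formulas (with B the base area, h the height and r the radius), the measurement results described at the start of this section are:

```latex
V_{\text{prism}} = Bh, \qquad
V_{\text{pyramid}} = \tfrac{1}{3}Bh, \qquad
V_{\text{cylinder}} = \pi r^{2} h, \qquad
V_{\text{cone}} = \tfrac{1}{3}\pi r^{2} h, \qquad
V_{\text{sphere}} = \tfrac{4}{3}\pi r^{3}.
```

The pyramid and cone come out at one-third of the prism and cylinder on the same base and height, and the sphere's volume is proportional to r^3; the exact constant 4π/3 was established later by Archimedes.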
About This Chapter Below is a sample breakdown of the Acids, Bases and Chemical Reactions chapter into a 5-day school week. Based on the pace of your course, you may need to adapt the lesson plan to fit your needs. |Day||Topics||Key Terms and Concepts Covered| |Monday||Decomposition and synthesis reactions; the Arrhenius theory; Bronsted-Lowry and Lewis||Simple synthesis vs. decomposition reactions; how the Arrhenius theory defines bases and acids; Bronsted-Lowry and Lewis method of defining acids and bases| |Tuesday||Neutralization; dissociation/autoionization; the pH scale||The process of neutralization, how to spot an amphoteric compound and ways to write a base/acid reaction; explanation of water autoionization and how to compute the concentration of hydronium and hydroxide; definition and computation of the pH or pOH| |Wednesday||Weak acids/bases; coordination chemistry; precipitation reactions||Discussion of strong bases, buffers, weak bases, weak acids and strong acids; how coordination compounds are used in bonding; identifying net ionic equations and precipitates| |Thursday||Oxidation numbers; redox equations; the activity series||Explanation of oxidation number assignments in chemical formulas; meaning of reducing/oxidizing agents and how to use the half-reaction method when balancing redox equations; when to use the activity series in single displacement reactions| |Friday||Electrochemistry; cathode/anode reactions; combustion reactions||Function and parts of an electrochemical cell; half-cell reactions for anodes and cathodes, redox concepts and cell voltage potential; cause of combustion reactions| 1. Decomposition and Synthesis Reactions Learn how to write, identify and predict the products of simple synthesis and decomposition reactions. This includes the composition of reactions with oxygen, of two metals, and of metals with nonmetals, as well as the decomposition of metal carbonates, metal chlorates and metal hydroxides. 2. The Arrhenius Definition of Acids and Bases In this lesson, you will learn the definition of Arrhenius acids and bases, discover some of their chemical properties and learn some examples. You will also learn about the difference between strong and weak Arrhenius acids and bases. 3. The Bronsted-Lowry and Lewis Definition of Acids and Bases Learn the Bronsted-Lowry and Lewis definitions of an acid and base. Discover how these theories differ from each other and from the Arrhenius theory of an acid and base. Learn how to identify an acid in terms of proton donation and a base as a proton acceptor, and explain what a conjugate acid or base is. 4. Neutralization and Acid-Base Reactions From this lesson, you will understand the neutralization process between acids and bases. Learn how a hydroxide ion from a base reacts with a hydronium ion from an acid to neutralize each other and form water. Discover what conjugate acids and bases are and what the definition of amphoteric is. 5. Dissociation Constant and Autoionization of Water Learn the meaning of auto-ionization of water, sometimes called self-ionization, where water acts as a proton donor and acceptor to form both hydronium and hydroxide ions. Learn what the auto-ionization constant is and how to use it to determine the concentration of either hydroxide or hydronium ions in a solution when given the other value. 6. The pH Scale: Calculating the pH of a Solution Learn the history of the pH scale, how to describe it and why it is used by scientists. 
Discover how to calculate the pH of an acid or base solutions given either the hydroxide ion concentration or the hydronium ion concentration. Learn how to start with the pH and calculate the hydroxide and hydronium ion concentrations. 7. Weak Acids, Weak Bases, and Buffers This lesson covers both strong and weak acids and bases, using human blood as an example for the discussion. Other concepts discussed included conjugate acids and bases, the acidity constant, and buffer systems within the blood. 8. Coordination Chemistry: Bonding in Coordinated Compounds Discover what a coordinated compound is. Understand how bonding occurs in coordinated compounds and some of the possible shapes coordinated compounds can be. Learn the uses in nature and industry for coordinated compounds. 9. Precipitation Reactions: Predicting Precipitates and Net Ionic Equations Learn what a precipitate is and predict when it will form in an aqueous chemical reaction, usually a double-replacement reaction. Learn what an ionic equation is, how it differs from a net ionic equation and how to write a net ionic equation. 10. Assigning Oxidation Numbers to Elements in a Chemical Formula Learn the importance of oxidation in chemical reactions. Discover the rules for assigning oxidation numbers in both covalent compounds and ionic compounds. Learn how to assign the oxidation number for each element in a chemical formula. 11. Balancing Redox Reactions and Identifying Oxidizing and Reducing Agents Learn how to identify an oxidizing agent and a reducing agent and how the loss or gain of electrons applies to each one. Learn the relationship between an oxidized or reduced substance and the oxidizing or reducing agent associated with it. Discover what steps to take to balance a redox reaction. 12. The Activity Series: Predicting Products of Single Displacement Reactions Discover what a single replacement reaction is and how to identify it. Learn what chemical activity is, how that applies to an activity series table and how to predict the product of a single replacement reaction by referring to the activity series. 13. Electrochemical Cells and Electrochemistry Learn to identify the parts of and be able to describe an electrochemical cell, including the electrolyte, electrodes, anodes, and cathodes. Learn how to make a homemade lemon battery and how to diagram an electrochemical cell that will light a light bulb. 14. Cathode and Anode Half-Cell Reactions Learn how to write electrode half-reactions for cathodes and anodes. Discover how to calculate cell voltage potential when given a table of standard electrode potentials. Learn how to prevent corrosion using redox concepts and how to protect metal by cathodic protection. 15. Writing and Balancing Combustion Reactions Discover what a combustion reaction is as well as what reactants are needed and what products are produced. Learn to write and balance a combustion reaction. Through the concepts of bond energies, learn how to explain why combustion reactions are largely exothermic. Earning College Credit Did you know… We have over 200 college courses that prepare you to earn credit by exam that is accepted by over 1,500 colleges and universities. You can test out of the first two years of college and save thousands off your degree. Anyone can earn credit-by-exam regardless of age or education level. To learn more, visit our Earning Credit Page Transferring credit to the school of your choice Not sure what college you want to attend yet? 
For every quadratic equation there can be one or more than one solution. These are called the roots of the quadratic equation. For a quadratic equation ax^2 + bx + c = 0, the sum of its roots = –b/a and the product of its roots = c/a. A quadratic equation may be expressed as a product of two binomials. While it is true that three equations are needed to find the three coefficients, some conditions might help develop a specific equation. However, you are asking for "a" quadratic equation. Assuming a vertical axis of symmetry, the equation would be of... MODULE 1: Quadratic Equations and Inequalities - Introduction and Focus Questions. How are quadratic equations different from linear equations? The Quadratic Formula: The quadratic formula is a formula that you can substitute values into in order to find the solutions to any quadratic equation. This is just one method of solving quadratics; you will encounter more throughout the course. Make sure you are happy with the following topics before continuing: Rearranging Formulas; Surds; BIDMAS. Q. Sachin and Rahul attempted to solve a quadratic equation. Sachin made a mistake in writing down the constant term and ended up with roots (4, 3). Rahul made a mistake in writing down the coefficient of x and got roots (3, 2). The correct roots of the equation are: (1) –4, –3 (2) 6, 1 (3) 4, 3 (4) –6, –1 [AIEEE-2011] The second method, factoring, becomes much more difficult as the quadratic equation becomes more complex. For example, it is much easier to factor a quadratic equation in the form ax^2 + bx + c where a = 1 than it is to factor one where a ≠ 1. CAT Algebra questions from linear equations and quadratic equations in the Quantitative Aptitude section of the CAT exam consist of concepts from equations and algebra. Get as much practice as you can in these two topics, because being good at framing equations is enormously useful in other CAT topics as well.
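A small sketch tying the pieces above together: the quadratic formula, the sum and product relations, and a check of the Sachin-and-Rahul question (Sachin's roots give the correct sum, Rahul's the correct product).

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = cmath.sqrt(b * b - 4 * a * c)   # cmath also handles negative discriminants
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

r1, r2 = quadratic_roots(1, -7, 6)
print(r1.real, r2.real)                  # 6.0 and 1.0
print((r1 + r2).real, (r1 * r2).real)    # sum = -b/a = 7, product = c/a = 6

# The Sachin-and-Rahul question: Sachin's roots (4, 3) have the correct sum
# (he only miscopied the constant term), Rahul's roots (3, 2) have the correct
# product (he only miscopied the coefficient of x).  With a = 1:
b = -(4 + 3)       # -(correct sum)   -> -7
c = 3 * 2          # correct product  ->  6
print([r.real for r in quadratic_roots(1, b, c)])   # [6.0, 1.0] -> option (2)
```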
Supply and demand
In microeconomics, supply and demand is an economic model of price determination in a market. It concludes that in a competitive market, the unit price for a particular good, or other traded item such as labor or liquid financial assets, will vary until it settles at a point where the quantity demanded (at the current price) equals the quantity supplied (at the current price), resulting in an economic equilibrium for price and quantity transacted.
- If demand increases (demand curve shifts to the right) and supply remains unchanged, a shortage occurs, leading to a higher equilibrium price.
- If demand decreases (demand curve shifts to the left) and supply remains unchanged, a surplus occurs, leading to a lower equilibrium price.
- If demand remains unchanged and supply increases (supply curve shifts to the right), a surplus occurs, leading to a lower equilibrium price.
- If demand remains unchanged and supply decreases (supply curve shifts to the left), a shortage occurs, leading to a higher equilibrium price.
- 1 Graphical representation of supply and demand
- 2 Microeconomics
- 3 Other markets
- 4 Empirical estimation
- 5 Macroeconomic uses of demand and supply
- 6 History
- 7 Criticisms
- 8 See also
- 9 References
- 10 Further reading
- 11 External links
Graphical representation of supply and demand
Although it is normal to regard the quantity demanded and the quantity supplied as functions of the price of the goods, the standard graphical representation, usually attributed to Alfred Marshall, has price on the vertical axis and quantity on the horizontal axis, the opposite of the standard convention for the representation of a mathematical function. Since determinants of supply and demand other than the price of the goods in question are not explicitly represented in the supply-demand diagram, changes in the values of these variables are represented by moving the supply and demand curves (often described as "shifts" in the curves). By contrast, responses to changes in the price of the good are represented as movements along unchanged supply and demand curves. A supply schedule is a table that shows the relationship between the price of a good and the quantity supplied. Under the assumption of perfect competition, supply is determined by marginal cost. That is, firms will produce additional output while the cost of producing an extra unit of output is less than the price they would receive. A rise in the cost of raw materials decreases supply, shifting the supply curve to the left, while a fall in input costs increases supply, shifting the supply curve to the right. By its very nature, conceptualizing a supply curve requires the firm to be a perfect competitor (i.e. to have no influence over the market price). This is true because each point on the supply curve is the answer to the question "If this firm is faced with this potential price, how much output will it be able and willing to sell?" If a firm has market power, its decision of how much output to provide to the market influences the market price, therefore the firm is not "faced with" any price, and the question becomes less relevant. Economists distinguish between the supply curve of an individual firm and the market supply curve. The market supply curve is obtained by summing the quantities supplied by all suppliers at each potential price.
Thus, in the graph of the supply curve, individual firms' supply curves are added horizontally to obtain the market supply curve. Economists also distinguish the short-run market supply curve from the long-run market supply curve. In this context, two things are assumed constant by definition of the short run: the availability of one or more fixed inputs (typically physical capital), and the number of firms in the industry. In the long run, firms have a chance to adjust their holdings of physical capital, enabling them to better adjust their quantity supplied at any given price. Furthermore, in the long run potential competitors can enter or exit the industry in response to market conditions. For both of these reasons, long-run market supply curves are generally flatter than their short-run counterparts. The determinants of supply are:
- Production costs: how much a good costs to produce. Production costs are the costs of the inputs, primarily labor, capital, energy and materials. They depend on the technology used in production and on technological advances. See: Productivity
- Firms' expectations about future prices
- Number of suppliers
A demand schedule, depicted graphically as the demand curve, represents the amount of some good that buyers are willing and able to purchase at various prices, assuming all determinants of demand other than the price of the good in question, such as income, tastes and preferences, the price of substitute goods, and the price of complementary goods, remain the same. Following the law of demand, the demand curve is almost always represented as downward-sloping, meaning that as price decreases, consumers will buy more of the good. Just as supply curves reflect marginal cost curves, demand curves are determined by marginal utility curves. Consumers will be willing to buy a given quantity of a good, at a given price, if the marginal utility of additional consumption is equal to the opportunity cost determined by the price, that is, the marginal utility of alternative consumption choices. The demand schedule is defined as the willingness and ability of a consumer to purchase a given product in a given time frame. Although the demand curve is generally downward-sloping, as noted above, there may be rare examples of goods that have upward-sloping demand curves. Two different hypothetical types of goods with upward-sloping demand curves are Giffen goods (an inferior but staple good) and Veblen goods (goods made more fashionable by a higher price). By its very nature, conceptualizing a demand curve requires that the purchaser be a perfect competitor, that is, that the purchaser has no influence over the market price. This is true because each point on the demand curve is the answer to the question "If this buyer is faced with this potential price, how much of the product will it purchase?" If a buyer has market power, so that its decision of how much to buy influences the market price, then the buyer is not "faced with" any price, and the question is meaningless. As with supply curves, economists distinguish between the demand curve of an individual and the market demand curve. The market demand curve is obtained by summing the quantities demanded by all consumers at each potential price. Thus, in the graph of the demand curve, individuals' demand curves are added horizontally to obtain the market demand curve. The determinants of demand are:
- Tastes and preferences
- Prices of related goods and services
- Consumers' expectations about future prices and incomes
- Number of potential consumers
Generally speaking, an equilibrium is defined to be the price-quantity pair where the quantity demanded is equal to the quantity supplied. It is represented by the intersection of the demand and supply curves. The analysis of various equilibria is a fundamental aspect of microeconomics. Market equilibrium: a situation in a market when the price is such that the quantity demanded by consumers is correctly balanced by the quantity that firms wish to supply. In this situation, the market clears. Changes in market equilibrium: practical uses of supply and demand analysis often center on the different variables that change equilibrium price and quantity, represented as shifts in the respective curves. Comparative statics of such a shift traces the effects from the initial equilibrium to the new equilibrium. Demand curve shifts: when consumers increase the quantity demanded at a given price, it is referred to as an increase in demand. Increased demand can be represented on the graph as the curve being shifted to the right. At each price point, a greater quantity is demanded, as from the initial curve D1 to the new curve D2. In the diagram, this raises the equilibrium price from P1 to the higher P2, and raises the equilibrium quantity from Q1 to the higher Q2. A movement along the curve is described as a "change in the quantity demanded" to distinguish it from a "change in demand," that is, a shift of the curve. In this example, there has been an increase in demand which has caused an increase in (equilibrium) quantity. The increase in demand could also come from changing tastes and fashions, incomes, price changes in complementary and substitute goods, market expectations, and the number of buyers. This would cause the entire demand curve to shift, changing the equilibrium price and quantity. Note in the diagram that the shift of the demand curve, by causing a new equilibrium price to emerge, resulted in movement along the supply curve from the point (Q1, P1) to the point (Q2, P2). If demand decreases, the opposite happens: a shift of the curve to the left. If demand starts at D2 and decreases to D1, the equilibrium price will decrease, and the equilibrium quantity will also decrease. The quantity supplied at each price is the same as before the demand shift, reflecting the fact that the supply curve has not shifted; but the equilibrium quantity and price are different as a result of the change (shift) in demand. Supply curve shifts: when technological progress occurs, the supply curve shifts. For example, assume that someone invents a better way of growing wheat so that the cost of growing a given quantity of wheat decreases. Otherwise stated, producers will be willing to supply more wheat at every price, and this shifts the supply curve S1 outward, to S2: an increase in supply. This increase in supply causes the equilibrium price to decrease from P1 to P2. The equilibrium quantity increases from Q1 to Q2 as consumers move along the demand curve to the new lower price. As a result of a supply curve shift, the price and the quantity move in opposite directions. If the quantity supplied decreases, the opposite happens. If the supply curve starts at S2 and shifts leftward to S1, the equilibrium price will increase and the equilibrium quantity will decrease as consumers move along the demand curve to the new higher price and associated lower quantity demanded. (A minimal numeric sketch of these comparative statics is given below.)
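The comparative statics above can be made concrete with straight-line curves. The following Python sketch is illustrative only: the linear functional forms and all coefficients (the intercepts and slopes of D1, D2 and S1) are hypothetical assumptions, not figures from the text.

```python
def equilibrium(a_d, b_d, a_s, b_s):
    """Equilibrium of linear curves Qd = a_d - b_d*P and Qs = a_s + b_s*P.

    Setting Qd = Qs gives P* = (a_d - a_s) / (b_d + b_s); Q* then follows
    from either curve.
    """
    p_star = (a_d - a_s) / (b_d + b_s)
    q_star = a_d - b_d * p_star
    return p_star, q_star

# Initial curves D1 and S1 (hypothetical numbers).
p1, q1 = equilibrium(a_d=100, b_d=2, a_s=20, b_s=3)
print(p1, q1)   # 16.0 68.0 -> (P1, Q1)

# Demand shifts right (D1 -> D2): the demand intercept rises, supply unchanged.
p2, q2 = equilibrium(a_d=120, b_d=2, a_s=20, b_s=3)
print(p2, q2)   # 20.0 80.0 -> both equilibrium price and quantity rise
```

A leftward supply shift can be simulated the same way by lowering the supply intercept; price then rises while quantity falls, as described in the text.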
The quantity demanded at each price is the same as before the supply shift, reflecting the fact that the demand curve has not shifted. But due to the change (shift) in supply, the equilibrium quantity and price have changed. The movement of the supply curve in response to a change in a non-price determinant of supply is caused by a change in the y-intercept, the constant term of the supply equation. The supply curve shifts up and down the y-axis as non-price determinants of supply change. Partial equilibrium, as the name suggests, takes into consideration only a part of the market to attain equilibrium. Jain proposes (attributed to George Stigler): "A partial equilibrium is one which is based on only a restricted range of data, a standard example is price of a single product, the prices of all other products being held fixed during the analysis." The supply-and-demand model is a partial equilibrium model of economic equilibrium, where the clearance on the market of some specific goods is obtained independently from prices and quantities in other markets. In other words, the prices of all substitutes and complements, as well as the income levels of consumers, are held constant. This makes analysis much simpler than in a general equilibrium model, which includes an entire economy. Here the dynamic process is that prices adjust until supply equals demand. It is a powerfully simple technique that allows one to study equilibrium, efficiency and comparative statics. The stringency of the simplifying assumptions inherent in this approach makes the model considerably more tractable, but may produce results which, while seemingly precise, do not effectively model real-world economic phenomena. Partial equilibrium analysis examines the effects of policy action in creating equilibrium only in the particular sector or market which is directly affected, ignoring its effects on other markets or industries on the assumption that, being small, they will have little impact, if any. Hence this analysis is considered to be useful in narrowly defined markets. Léon Walras first formalized the idea of a one-period economic equilibrium of the general economic system, but it was French economist Antoine Augustin Cournot and English political economist Alfred Marshall who developed tractable models to analyze an economic system.
Other markets
The model of supply and demand also applies to various specialty markets. The model is commonly applied to wages, in the market for labor. The typical roles of supplier and demander are reversed: the suppliers are individuals, who try to sell their labor for the highest price, and the demanders of labor are businesses, which try to buy the type of labor they need at the lowest price. The equilibrium price for a certain type of labor is the wage rate. A number of economists (for example Pierangelo Garegnani, Robert L. Vienneau, and Arrigo Opocher & Ian Steedman), building on the work of Piero Sraffa, argue that this model of the labor market, even given all its assumptions, is logically incoherent. Michael Anyadike-Danes and Wynne Godley argue, based on simulation results, that little of the empirical work done with the textbook model constitutes a potentially falsifying test, and, consequently, empirical evidence hardly exists for that model. This criticism of the application of the model of supply and demand generalizes, particularly to all markets for factors of production. In both classical and Keynesian economics, the money market is analyzed as a supply-and-demand system with interest rates being the price.
The money supply may be a vertical supply curve, if the central bank of a country chooses to use monetary policy to fix its value regardless of the interest rate; in this case the money supply is totally inelastic. On the other hand, the money supply curve is a horizontal line if the central bank is targeting a fixed interest rate and ignoring the value of the money supply; in this case the money supply curve is perfectly elastic. The demand for money intersects with the money supply to determine the interest rate.
Empirical estimation
Demand and supply relations in a market can be statistically estimated from price, quantity, and other data with sufficient information in the model. This can be done with simultaneous-equation methods of estimation in econometrics. Such methods allow solving for the model-relevant "structural coefficients," the estimated algebraic counterparts of the theory. The parameter identification problem is a common issue in "structural estimation." Typically, data on exogenous variables (that is, variables other than price and quantity, both of which are endogenous variables) are needed to perform such an estimation. An alternative to "structural estimation" is reduced-form estimation, which regresses each of the endogenous variables on the respective exogenous variables.
Macroeconomic uses of demand and supply
Demand and supply have also been generalized to explain macroeconomic variables in a market economy, including the quantity of total output and the general price level. The Aggregate Demand-Aggregate Supply model may be the most direct application of supply and demand to macroeconomics, but other macroeconomic models also use supply and demand. Compared to microeconomic uses of demand and supply, different (and more controversial) theoretical considerations apply to such macroeconomic counterparts as aggregate demand and aggregate supply. Demand and supply are also used in macroeconomic theory to relate money supply and money demand to interest rates, and to relate labor supply and labor demand to wage rates.
History
According to Hamid S. Hosseini, the power of supply and demand was understood to some extent by several early Muslim scholars, such as the fourteenth-century Mamluk scholar Ibn Taymiyyah, who wrote: "If desire for goods increases while its availability decreases, its price rises. On the other hand, if availability of the good increases and the desire for it decreases, the price comes down." John Locke's 1691 work Some Considerations on the Consequences of the Lowering of Interest and the Raising of the Value of Money includes an early and clear description of supply and demand and their relationship. In this description demand is rent: "The price of any commodity rises or falls by the proportion of the number of buyers and sellers" and "that which regulates the price... [of goods] is nothing else but their quantity in proportion to their rent." The phrase "supply and demand" was first used by James Denham-Steuart in his Inquiry into the Principles of Political Economy, published in 1767. Adam Smith used the phrase in his 1776 book The Wealth of Nations, and David Ricardo titled one chapter of his 1817 work Principles of Political Economy and Taxation "On the Influence of Demand and Supply on Price".
In The Wealth of Nations, Smith generally assumed that the supply price was fixed but that its "merit" (value) would decrease as its "scarcity" increased, in effect anticipating what was later called the law of demand. Ricardo, in Principles of Political Economy and Taxation, more rigorously laid down the assumptions that were used to build his ideas of supply and demand. Antoine Augustin Cournot first developed a mathematical model of supply and demand in his 1838 Researches into the Mathematical Principles of Wealth, including diagrams. During the late 19th century the marginalist school of thought emerged, started mainly by William Stanley Jevons, Carl Menger, and Léon Walras. The key idea was that the price was set at the margin, that is, by the last and most expensive unit exchanged. This was a substantial change from Adam Smith's thoughts on determining the supply price. In his 1870 essay "On the Graphical Representation of Supply and Demand", Fleeming Jenkin, in the course of "introduc[ing] the diagrammatic method into the English economic literature", published the first drawing of supply and demand curves in English, including comparative statics from a shift of supply or demand and application to the labor market. The model was further developed and popularized by Alfred Marshall in the 1890 textbook Principles of Economics.
Criticisms
The philosopher Hans Albert argued that the ceteris paribus conditions of the marginalist theory rendered the theory itself an empty tautology, completely closed to experimental testing. In essence, a demand or supply curve (a theoretical line giving the quantity of a product that would be offered or requested at each given price) is a purely theoretical construct that cannot be observed directly. Cambridge economist Joan Robinson attacked the theory along similar lines, arguing that the concept is circular: "Utility is the quality in commodities that makes individuals want to buy them, and the fact that individuals want to buy commodities shows that they have utility." Robinson also pointed out that, because the theory assumes that preferences are fixed, utility is not a testable assumption. This is because, if we observe changes in people's behavior in relation to a change in prices or a change in the underlying budget constraint, we can never be sure to what extent the change in behavior was due to the change in price or budget constraint and how much was due to a change in preferences. Even in practical terms, neither cardinal nor ordinal utility is empirically observable in the real world. In the case of cardinal utility, it is impossible to measure the level of satisfaction "quantitatively" when someone consumes or purchases an apple. In the case of ordinal utility, it is impossible to determine what choices were made when someone purchases, for example, an orange. Any act would involve preference over a vast set of choices (such as an apple, orange juice, other fruit or vegetables, vitamin C tablets, exercise, not purchasing, and so on). At least two assumptions are necessary for the validity of the standard model: first, that supply and demand are independent; second, that supply is "constrained by a fixed resource". If these conditions do not hold, then the Marshallian model cannot be sustained. Sraffa's critique focused on the inconsistency (except in implausible circumstances) of partial equilibrium analysis and on the rationale for the upward slope of the supply curve in a market for a produced consumption good. The notability of Sraffa's critique is also demonstrated by Paul A.
Samuelson's comments and engagements with it over many years, for example: - "What a cleaned-up version of Sraffa (1926) establishes is how nearly empty are all of Marshall's partial equilibrium boxes. To a logical purist of Wittgenstein and Sraffa class, the Marshallian partial equilibrium box of constant cost is even more empty than the box of increasing cost.". Aggregate excess demand in a market is the difference between the quantity demanded and the quantity supplied as a function of price. In the model with an upward-sloping supply curve and downward-sloping demand curve, the aggregate excess demand function only intersects the axis at one point, namely, at the point where the supply and demand curves intersect. The Sonnenschein–Mantel–Debreu theorem shows that the standard model cannot be rigorously derived in general from general equilibrium theory. The model of prices being determined by supply and demand assumes perfect competition. But: - "economists have no adequate model of how individuals and firms adjust prices in a competitive model. If all participants are price-takers by definition, then the actor who adjusts prices to eliminate excess demand is not specified". Goodwin, Nelson, Ackerman, and Weisskopf write: - "If we mistakenly confuse precision with accuracy, then we might be misled into thinking that an explanation expressed in precise mathematical or graphical terms is somehow more rigorous or useful than one that takes into account particulars of history, institutions or business strategy. This is not the case. Therefore, it is important not to put too much confidence in the apparent precision of supply and demand graphs. Supply and demand analysis is a useful precisely formulated conceptual tool that clever people have devised to help us gain an abstract understanding of a complex world. It does not—nor should it be expected to—give us in addition an accurate and complete description of any particular real world market." - Alpha consumer - Barriers to entry - Cambridge capital controversy - Consumer theory - Deadweight loss - Demand chain - Demand forecasting - Demand shortfall - Demand vacuum - Economic surplus - Effective demand - Effect of taxes and subsidies on price - Excess demand function - History of economic thought - Induced demand - Inverse demand function - Labor shortage - Law of supply - Neoclassical economics - Price discovery - Producer's surplus - Real prices and ideal prices - Say's Law - "Supply creates its own demand" - Supply shock - Besanko, David; Braeutigam, Ronald (2010). Microeconomics (4th ed.). Wiley. - Note that unlike most graphs, supply & demand curves are plotted with the independent variable (price) on the vertical axis and the dependent variable (quantity supplied or demanded) on the horizontal axis. - "Marginal Utility and Demand". Retrieved 2007-02-09. - "Microeconomics - Supply and Demand". Retrieved 2014-12-31. - Mankiw, N.G.; Taylor, M.P. (2011). Economics (2nd ed., revised ed.). Andover: Cengage Learning. - Jain, T.R. (2006–2007). Microeconomics and Basic Mathematics. New Delhi: VK Publications. p. 28. ISBN 81-87140-89-5. - Kibbe, Matthew B. "The Minimum Wage: Washington's Perennial Myth". Cato Institute. Retrieved 2007-02-09. - P. Garegnani, "Heterogeneous Capital, the Production Function and the Theory of Distribution", Review of Economic Studies, V. 37, N. 3 (Jul. 1970): 407–436 - Robert L. Vienneau, "On Labour Demand and Equilibria of the Firm", Manchester School, V. 73, N. 5 (Sep. 
2005): 612–619 - Arrigo Opocher and Ian Steedman, "Input Price-Input Quantity Relations and the Numeraire", Cambridge Journal of Economics, V. 3 (2009): 937–948 - Michael Anyadike-Danes and Wynne Godley, "Real Wages and Employment: A Sceptical View of Some Recent Empirical Work", Manchester School, V. 62, N. 2 (Jun. 1989): 172–187 - Basil J. Moore, Horizontalists and Verticalists: The Macroeconomics of Credit Money, Cambridge University Press, 1988 - Ritter, Lawrence S.; Silber, William L.; Udell, Gregory F. (2000). Principles of Money, Banking, and Financial Markets (10th ed.). Addison-Wesley, Menlo Park, CA. pp. 431–438, 465–476. ISBN 0-321-37557-2. - Hosseini, Hamid S. (2003). "Contributions of Medieval Muslim Scholars to the History of Economics and their Impact: A Refutation of the Schumpeterian Great Gap". In Biddle, Jeff E.; Davis, Jon B.; Samuels, Warren J. A Companion to the History of Economic Thought. Malden, MA: Blackwell. pp. 28–45 [28 & 38]. doi:10.1002/9780470999059.ch3. ISBN 0-631-22573-0. (citing Hamid S. Hosseini, 1995. "Understanding the Market Mechanism Before Adam Smith: Economic Thought in Medieval Islam," History of Political Economy, Vol. 27, No. 3, 539–61). - John Locke (1691) Some Considerations on the Consequences of the Lowering of Interest and the Raising of the Value of Money - Thomas M. Humphrey, 1992. "Marshallian Cross Diagrams and Their Uses before Alfred Marshall," Economic Review, Mar/Apr, Federal Reserve Bank of Richmond, pp. 3–23. - A.D. Brownlie and M. F. Lloyd Prichard, 1963. "Professor Fleeming Jenkin, 1833–1885 Pioneer in Engineering and Political Economy," Oxford Economic Papers, NS, 15(3), p. 211. - Fleeming Jenkin, 1870. "The Graphical Representation of the Laws of Supply and Demand, and their Application to Labour," in Alexander Grant, ed., (Scroll to chapter) Recess Studies, ch. VI, pp. 151–85. Edinburgh: Edmonston and Douglas - Pilkington, Philip. Fixing the Economists. 27 February 2014. Available at: Hans Albert Expands Robinson’s Critique of Marginal Utility Theory to the Law of Demand - Robinson, Joan (1962). Economic Philosophy. Harmondsworth, Middlesex, UK: Penguin Books. - Pilkington, Philip. Fixing the Economists. 17 February 2014. Available at: Joan Robinson’s Critique of Marginal Utility Theory - Avi J. Cohen, "'The Laws of Returns Under Competitive Conditions': Progress in Microeconomics Since Sraffa (1926)?", Eastern Economic Journal, V. 9, N. 3 (Jul.–Sep. 1983) - Paul A. Samuelson, "Reply" in Critical Essays on Piero Sraffa's Legacy in Economics (edited by H. D. Kurz) Cambridge University Press, 2000 - Alan Kirman, "The Intrinsic Limits of Modern Economic Theory: The Emperor has No Clothes", The Economic Journal, V. 99, N. 395, Supplement: Conference Papers (1989): pp. 126–139 - Alan P. Kirman, "Whom or What Does the Representative Individual Represent?" Journal of Economic Perspectives, V. 6, N. 2 (Spring 1992): pp. 117–136 - Goodwin, N.; Nelson, J.; Ackerman, F. & Weisskopf, T.: Microeconomics in Context 2d ed. Sharpe 2009 ISBN 978-0-7656-2301-0 - Ehrbar, Al (2008). "Supply". In David R. Henderson. Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0865976658. OCLC 237794267. - Henderson, David R. (2008). "Demand". Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0865976658. OCLC 237794267. - Foundations of Economic Analysis by Paul A. Samuelson
- Price Theory and Applications by Steven E. Landsburg ISBN 0-538-88206-9 - An Inquiry into the Nature and Causes of the Wealth of Nations, Adam Smith, 1776 - Supply and Demand book by Hubert D. Henderson at Project Gutenberg. - Nobel Prize Winner Prof. William Vickrey: 15 fatal fallacies of financial fundamentalism – A Disquisition on Demand Side Economics - "Marshallian Cross Diagrams and Their Uses before Alfred Marshall: The Origins of Supply and Demand Geometry" by Thomas Humphrey (via the Richmond Fed) - By what is the price of a commodity determined?, a brief statement of Karl Marx's rival account - Supply and Demand by Fiona Maclachlan and Basic Supply and Demand by Mark Gillis, Wolfram Demonstrations Project.
Reading comprehension and critical thinking are closely related: readers apply comprehension skills to determine what a text says, and they rely on critical thinking skills to decide whether to believe it. Improving critical thinking skills therefore also tends to improve reading comprehension, problem solving, and writing. Many instructional resources pair reading passages with critical-thinking questions that ask students to interpret a text, make deductions, and infer the meaning of vocabulary words from context, and such materials are published for levels ranging from the early grades through college.
Accuracy
- Accuracy is the closeness of a measured value to the true value. For example, the measured density of water has become more accurate with improved experimental design, technique, and equipment.
- Percent error is used to estimate the accuracy of a measurement. Percent error is always reported as a positive number. What is the percent error if the measured density of titanium (Ti) is 4.45 g/cm³ and the accepted density of Ti is 4.50 g/cm³?
Precision
- Precision is the agreement between repeated measurements of the same sample. Precision is usually expressed as a standard deviation.
- For example, the precision of a method for measuring arsenic (As) was determined by measuring 7 different solutions, each containing 14.3 μg/L of As. The results were: average = 15.3 μg/L, standard deviation = 2.1 μg/L. The true concentration of As in this experiment is 14.3 μg/L; the standard deviation of 2.1 μg/L describes how precise the method is, and the difference between the average and the true value estimates its accuracy.
Accuracy and precision
- A set of four targets is often used to illustrate the difference: a grouping of shots can be accurate and precise, precise but not accurate, accurate but not precise, or neither accurate nor precise.
Errors
- Systematic (or determinate) errors are reproducible and cause a bias in the same direction for each measurement. For example, a poorly trained operator who consistently makes the same mistake will cause systematic error. Systematic error can be corrected.
- Random (or indeterminate) errors are caused by the natural uncertainty that occurs with any measurement. Random errors obey the laws of probability; that is, random error might cause a value to be over-predicted during its first measurement and under-predicted during its second measurement. Random error cannot be corrected.
Interpolation and significant figures
- By convention, a measurement is recorded by writing all exactly known digits and one digit which is uncertain, together with a unit label. All digits written in this way, including the uncertain one, are called significant figures.
- For example, a line measured against a ruler graduated in tenths of a centimetre might be recorded as 2.73 cm long. This measurement has 3 significant figures: the first two digits (2.7 cm) are exactly known, and the third digit (0.03 cm) is uncertain because it was interpolated, or estimated, one digit beyond the smallest graduation.
- When reading a graduated cylinder, always measure the volume of a liquid at the bottom of the meniscus. A reading of 52.8 mL means the 52 mL is exactly known, while the 0.8 mL is uncertain because it was interpolated one digit beyond the smallest graduation.
Significant figures and zeros
- Zeros between nonzero digits are significant; that is, 508 cm has 3 significant figures.
- Leading zeros merely locate the decimal point and are never significant; that is, 0.0497 cm equals 4.97 × 10⁻² cm and has 3 significant figures.
- Trailing zeros are significant as follows: 50.0 mL has 3 significant figures, 50. mL has 2 significant figures, and 50 mL has 1 significant figure.
Significant figures in addition and subtraction
- When adding or subtracting, do not extend the result beyond the first column that contains a doubtful figure. For example, 16.874 + 2.6 = 19.5 and 16.874 − 2.6 = 14.3, because 2.6 is known only to the tenths place.
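A small Python sketch can make the percent-error and precision calculations above concrete. The seven replicate arsenic values below are hypothetical, chosen only so that they reproduce the summary statistics quoted in the example (average 15.3 μg/L, standard deviation 2.1 μg/L); the original material reports only the summary values.

```python
import statistics

def percent_error(measured, accepted):
    """Percent error = |measured - accepted| / accepted * 100 (always positive)."""
    return abs(measured - accepted) / accepted * 100

# Titanium density question: measured 4.45 g/cm3 vs accepted 4.50 g/cm3.
print(round(percent_error(4.45, 4.50), 1))      # 1.1 (percent)

# Arsenic precision example: 7 replicate measurements of a 14.3 ug/L solution.
# The individual values are hypothetical but reproduce the quoted summary.
replicates = [12.8, 13.8, 17.8, 17.3, 14.8, 13.3, 17.3]
print(round(statistics.mean(replicates), 1))    # 15.3 ug/L (average)
print(round(statistics.stdev(replicates), 1))   # 2.1 ug/L (precision)
print(round(percent_error(15.3, 14.3), 1))      # 7.0 percent (accuracy of the method)
```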
Significant figures in multiplication and division
- When multiplying or dividing, the answer keeps as many significant figures as the factor with the fewest significant figures. For example, 2.005 g / 4.95 mL = 0.405 g/mL (three significant figures). Likewise, 16.874 × 2.6 = 44 and 16.874 / 2.6 = 6.5, because 2.6 has only two significant figures.
Significant figures in calculations that require multiple steps
- An average is the best estimate of the true value of a parameter, and a standard deviation is a measure of precision. Both require several steps to calculate. Keep track of the number of significant figures during each step, but do not discard or round any figures until the final number is reported. Exact numbers, such as the count of measurements used in an average, are treated as having an unlimited number of significant figures.
Sources
- American Public Health Association, American Water Works Association, Water Environment Federation. 1995. Standard Methods for the Examination of Water and Wastewater. 19th ed. Washington, DC: American Public Health Association.
- Barnes, D.S., J.A. Chandler. 1982. Chemistry 111-112 Workbook and Laboratory Manual. Amherst, MA: University of Massachusetts.
- Christian, G.D. 1986. Analytical Chemistry, 3rd ed. New York, NY: John Wiley & Sons, Inc.
- Frisbie, S.H., E.J. Mitchell, A.Z. Yusuf, M.Y. Siddiq, R.E. Sanchez, R. Ortega, D.M. Maynard, B. Sarkar. 2005. The development and use of an innovative laboratory method for measuring arsenic in drinking water from western Bangladesh. Environmental Health Perspectives. 113(9):1196-1204.
- Morrison Laboratories. 2006. Meniscus Madness. Available: http://www.morrisonlabs.com/meniscus.htm [accessed 25 August 2006].
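As a closing illustration of the rounding rules discussed above, the following Python sketch rounds results to a chosen number of significant figures and applies the addition and multiplication rules to the worked questions. The helper round_sig is hypothetical glue code, not part of any referenced source.

```python
import math

def round_sig(x, sig):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    digits = sig - int(math.floor(math.log10(abs(x)))) - 1
    return round(x, digits)

# Multiplication / division: keep as many significant figures as the factor
# with the fewest significant figures (here 2.6, which has two).
print(round_sig(16.874 * 2.6, 2))   # 44.0
print(round_sig(16.874 / 2.6, 2))   # 6.5

# Addition / subtraction: keep the result to the least precise decimal place
# of the inputs (2.6 is known only to the tenths place).
print(round(16.874 + 2.6, 1))       # 19.5
print(round(16.874 - 2.6, 1))       # 14.3
```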