[SOURCE: https://en.wikipedia.org/wiki/Category:CS1:_unfit_URL] | [TOKENS: 174] |
Category:CS1: unfit URL This tracking category lists pages with CS1 citations that use |url-status=usurped or |url-status=unfit. The keywords unfit and usurped are intended to identify original URLs that point to live sites that are inappropriate: spam, advertising, porn, etc. A URL that returns an HTTP 404 error is not considered to be unfit and, in such cases, editors should set |url-status=dead. CS1 and CS2 templates in pages listed in this category should be checked to ensure that the unfit and usurped keywords are correctly applied. Only Module:Citation/CS1 should directly add pages to this category. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_the_Dominican_Republic] | [TOKENS: 1911] |
History of the Jews in the Dominican Republic The history of the Jews in the Dominican Republic goes back to the late 1400s, with the arrival of Sephardic Jews exiled from Spain and the Mediterranean area in 1492 and 1497. This was followed by new waves of migrants dating from the 1700s and again in the period before and during World War II, reaching a peak in the late 1930s and early 1940s, as Jewish refugees fled the conditions in Europe brought on by WWII. History The first Jews known to have reached the island of Hispaniola were Sephardic Jews who came from the Iberian Peninsula in the 1490s. The majority of them were fleeing from Spain, where conversion to Catholicism was being enforced. Despite this, when the island was divided by the French and Spanish Empires in the 17th century, most Jews settled on the Spanish side, which would later become the Dominican Republic. Eventually, Sephardim from other countries also arrived. Most of them hid their Jewish identities or were unaffiliated with Jewish tradition by that time.[citation needed] Among their descendants were Dominican President Francisco Henríquez y Carvajal and his children Pedro Henríquez Ureña, Max Henríquez Ureña, and Camila Henríquez Ureña. Before Jewish migrants established the colony of Sosúa during WWII, there was an attempt to found a Jewish colony in the Dominican Republic in the late 19th century. This settlement was not as well documented as the one created in the 1940s. General Gregorio Luperón, who had served as President of the Dominican Republic and was living in exile in Paris in 1882, proposed the country as a refuge for Jews escaping pogroms in Russia. Luperón's motivations for proposing this plan seem to have stemmed from a combination of humanitarian concern and a desire to promote the economic development of the Dominican Republic. He believed that the Jewish refugees, with their skills and work ethic, could contribute to the prosperity of the country. Luperón initiated contact with several key figures and organizations in the Jewish world in order to circulate the idea, including the Alliance Israélite Universelle, the prominent Rothschild banking family, and the Jewish community in the United States, particularly in New York. While many Dominicans and Jews living in the Dominican Republic were already in favor of the idea, some opposed the plan, and others raised practical concerns about it, particularly the need for financial support, land allocation, and employment opportunities for potential settlers. It appears that a commission of Dominican landowners was formed to investigate the feasibility of the plan but that no concrete action was ultimately taken. While Luperón's plan for a Jewish colony in the Dominican Republic in the 1880s ultimately failed to materialize, it demonstrates the Dominican Republic's recurring role as a potential haven for Jewish refugees during times of crisis. The Dominican Republic was the only sovereign country willing to accept mass Jewish immigration immediately prior to and during World War II, the only alternative being the Shanghai International Settlement. The United States government had also attempted to set up a Jewish colony, in Alaska, in order to populate the area. However, what became known as the Alaska Plan was effectively buried by a lack of support and by opposition from antisemitic and nativist groups. In turn, support for the Jews fell almost solely on the Dominican Republic. 
At the Évian Conference, convened to address the Jewish refugee crisis, the Dominican Republic, under the rule of dictator Rafael Trujillo, offered to accept 100,000 Jewish refugees. However, it is estimated that only 5,000 visas were actually issued, and the vast majority of the recipients never reached the country because of the difficulty of getting out of occupied Europe. Trujillo then offered his personal estate in Sosúa to the Dominican Republic Settlement Association (DORSA), established by the American Jewish Joint Distribution Committee (JDC) to manage the resettlement project. In return for his land, Trujillo received $100,000 in DORSA stock. By February 1940, DORSA had obtained congressional approval for the settlement in Sosúa, and the plan began to move forward. By spring of that year, the colony began receiving its first settlers. About 700 European Jews of Ashkenazi descent reached the settlement, where each family received 33 hectares (82 acres) of land, 10 cows (plus 2 additional cows per child), a mule and a horse, and a US$10,000 loan (about 219,000 dollars at 2025 prices) at 1% interest. The colonists were expected to engage in communal agriculture, sharing work and profits equally. Dairying and poultry raising were also intended as complementary activities. However, crop-based agriculture proved largely unsuccessful due to poor soil, unpredictable rainfall, and limited market access. Due to the challenges of communal agriculture, the colony transitioned to a capitalist model by 1945, with individual families receiving their own farms. The only exception to this individualistic approach was the dairy and meat factories, which were run as cooperatives with profits divided according to investment. Those who did not travel to Sosúa usually settled in the capital, Santo Domingo. In 1943 the number of known Jews in the Dominican Republic peaked at 1,000. At the conclusion of WWII, the Jewish population in Sosúa gradually declined as residents relocated, mostly to the United States. As a portion of the Jewish population left, Dominican residents began to move into Sosúa. Throughout the majority of the 20th century, Sosúa existed as a mixed community of Jewish and Dominican residents, with the Jewish population aging and shrinking and the Dominican influence, both economic and cultural, becoming increasingly prominent. This culminated around 1980, when most of the remaining Jews sold their land to developers during Sosúa's tourist boom and the community went into a deep decline. Community The current population of known Jews in the Dominican Republic is close to 3,000, with the majority living in the capital, Santo Domingo, and others residing in Sosúa. However, while the Jewish community in Sosúa still exists, it has shrunk considerably. Many of the original settlers have died or emigrated, and their children often choose not to return. The community retains some of its unique character, with a mix of languages and cultural traditions, but the future of the Jewish community in Sosúa remains uncertain. Since Jews mixed with those already living in the Dominican Republic, the exact number of Dominicans with Jewish ancestry is not known. Despite the Jews' intermarriage with the Dominican people already living there, some spouses have formalized their Judaism through conversion and participate in Jewish communal life, while other Sephardic Jews converted to Catholicism while still maintaining their Sephardic culture. Some Dominican Jews have also made aliyah to Israel. 
There are three synagogues and one Sephardic Jewish educational center. One synagogue is the Centro Israelita de República Dominicana in Santo Domingo, another is a Chabad outreach center also in Santo Domingo, and the third is in the country's first established community in Sosúa. Beth Midrash Eleazar, the Sephardic educational center, caters to those Jews who are descendants of the Sephardic Jews who migrated to Hispaniola in colonial times and later. They also provide kosher meat in the Beth Yoseph style and supervise a small-scale kosher bakery. A weekly "afterschool" program is active at the Centro Israelita, and a chapter of the International Council of Jewish Women is also active. The Chabad outreach center focuses on helping the local Jewish population reconnect with their Jewish roots and (because Chabad is of the Chassidic Jewish tradition) it is a source for traditional Judaism in the Dominican Republic. In Sosúa, there is a small Jewish museum next to the synagogue. On the High Holidays, the Sosúa community hires a cantor from abroad who comes to lead services.[citation needed] Research A great deal of research on the subject of Dominican Jewry was done by Rabbi Henry Zvi Ucko, who had been a writer and teacher in Germany until political conditions and growing anti-Semitism forced him to emigrate[when?]. His travels eventually took him to the Dominican Republic, where he organized a congregation in Santo Domingo (Ciudad Trujillo) and began researching the history of Jews in the country. His research covered much of the history of the Sephardic Jews there and documented the assimilation that the population went through (and was going through) during his time. Included in his research is correspondence with Haim Horacio López Penha, a Dominican Jewish writer, who encouraged Ucko to write a history of the Jews in the Dominican Republic. More recently, the publication of the book "Once Jews" has made easily available information on many early Jewish settlers in the Dominican Republic. Scholars such as the historian of the town of Baní, Manuel Valera, as well as Dr. Yehonatan Demota, continue the study of Dominican Sephardic and converso ancestry, and the question of the Dominican anusim. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Character_class] | [TOKENS: 1718] |
Character class In tabletop games and video games, a character class is an occupation, profession or role assigned to a game character to highlight and differentiate their capabilities and specializations. In role-playing games (RPGs), character classes aggregate several abilities and aptitudes, and may also detail aspects of background and social standing, or impose behavior restrictions. Classes may be considered to represent archetypes, or specific careers. RPG systems that employ character classes often subdivide them into levels of accomplishment, to be attained by players during the course of the game. It is common for a character to remain in the same class for its lifetime, with a restricted tech tree of upgrades and power-ups, although some games allow characters to change class or attain multiple classes, usually at the cost of game currency or special items. Some systems eschew the use of classes and levels entirely; others hybridize them with skill-based systems or emulate them with character templates.[citation needed] In shooter games and other cooperative video games, classes are generally distinct roles with specific mission goals, weapons, or tactical aptitudes and special abilities, with only tangential relation to the RPG context. Their differences may range from simple equipment changes, such as sharpshooter classes armed with sniper rifles, or heavy weapon classes with machine guns and rocket launchers, to unique gameplay changes, such as medic classes that are lightly armed but tasked with healing and reviving injured allied players. History Dungeons & Dragons (D&D), the first formalized role-playing game, introduced the use of classes, which were inspired by the units in miniature wargames such as Chainmail. Many subsequent games adopted variations of the same idea. These games are sometimes referred to as 'class-based' systems. As well as in tabletop games, character classes are found in many role-playing video games and live action role-playing games. Many of the most popular role-playing games, such as d20 System and White Wolf games, still use character classes in one way or another. Most games offer additional ways to systematically differentiate characters, such as race or species, skills, or affiliations. In fantasy games and role-playing games In fantasy games, Fighter, Mage, and Thief form a common archetypal trio of basic classes, each one's abilities offsetting the others' weaknesses. The Fighter is strong and focuses on weapon-based combat; the Mage, renamed Wizard in later editions of Dungeons & Dragons, is a ranged fighter equipped with a variety of magical abilities for combative and utilitarian purposes; and the Thief, renamed Rogue in later editions, is not physically strong but focuses on speed and stealth. Thus, it is usual to find one or more classes that excel in combat, several classes (called spell-casters) that are able to perform magic (often different kinds of magic), and one or more classes that deal with stealth. In its original release, Dungeons & Dragons included three classes: Fighting Man, Magic User, and Cleric (a class distinct from Mages or Wizards that channels divine power from deific sources to perform thaumaturgy and miracles, rather than arcane magic drawn from cosmic sources to cast spells), while supplemental rules added the Thief class. 
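The class-based structure described above can be pictured as a small data model: a fixed set of archetypes, each bundling aptitudes that offset the others' weaknesses, with characters advancing through levels inside a single class. The following Python code is a minimal, purely illustrative sketch; the class names follow the Fighter/Mage/Thief trio from the text, but every attribute name and number is invented rather than taken from any particular game.

# A minimal, hypothetical sketch of a class-based system in the spirit of
# the Fighter/Mage/Thief trio. All attribute names and numbers are invented
# for illustration and are not taken from any specific game.
from dataclasses import dataclass

@dataclass(frozen=True)
class CharacterClass:
    name: str
    combat: int    # weapon-based fighting ability
    magic: int     # spell-casting ability
    stealth: int   # speed and stealth

# Each archetype is strong where the others are weak.
ARCHETYPES = {
    "Fighter": CharacterClass("Fighter", combat=8, magic=1, stealth=3),
    "Mage":    CharacterClass("Mage",    combat=2, magic=8, stealth=2),
    "Thief":   CharacterClass("Thief",   combat=3, magic=1, stealth=8),
}

@dataclass
class Character:
    name: str
    char_class: CharacterClass
    level: int = 1

    def level_up(self) -> None:
        # A character normally stays in one class for life and advances
        # through its levels of accomplishment.
        self.level += 1

hero = Character("Aveline", ARCHETYPES["Thief"])
hero.level_up()
print(hero.char_class.name, hero.level)  # Thief 2

Multi-classing or class changes, where a game allows them, would amount to swapping or combining entries from the archetype table, usually at some in-game cost.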
In subsequent editions of the game, new classes were added individually, from spell-casting classes such as the Sorcerer, Warlock and Druid, to more combat-centered classes such as the Barbarian, Ranger and Monk, along with variant subclasses. In science fiction and other non-fantasy role-playing games, the role of magic user is often filled by a scientist or other intelligence-based class, the Cleric becomes a medic or similarly supportive role, and the Rogue and/or Ranger is replaced by an explorer or assassin. Some science fiction and supernatural-themed RPGs also use psychic powers as a stand-in for magic. There are also character classes that combine features of the classes listed above and are frequently called hybrid classes. Some examples include the Bard (a cross between the Thief and Mage with an emphasis on interpersonal skills, mental and visual spells, and supportive magical abilities), or the Paladin (a cross between the Fighter and Cleric with slightly decreased combat skills relative to a Fighter but various innate abilities that are used to heal or protect allies and repel and/or smite evil opponents).[citation needed] Some RPGs feature another variation on the class mechanic. For example, in Warhammer Fantasy Roleplay, players choose a career. The career works like a class, with abilities (known in WFRP as skills and talents) added to the character based on the chosen career. However, as the player advances and gains more experience, he or she may choose a new career according to a predefined career path or change to a completely different career. WFRP is also notable in that characters are encouraged to roll to determine their starting career, which is compensated for with free XP that can be spent on more skills. As an alternative to class-based systems, skill-based systems are designed to give the player a stronger sense of control over how their character develops. In such systems, players can often choose the direction of their characters as they play, usually by assigning points to certain skills. Classless games often provide templates for the player to work from, many of which are based on traditional character classes. Many classless games' settings or rules systems lend themselves to the creation of characters following certain archetypal trends.[citation needed] For example, in the role-playing video game Fallout, common character archetypes include the "shooter", "survivalist", "scientist", "smooth talker" and "sneaker", unofficial terms representing various possible means of solving or avoiding conflicts and puzzles in the game. GURPS, which inspired Fallout's system, also used a classless system. The original PlayStation 2 release of the role-playing video game Final Fantasy XII included a skill-based system in which, as the player progressed, they would gain buffs and abilities (called licenses) via the game's License Board, which all party members shared. Final Fantasy XII's re-release, Final Fantasy XII International Zodiac Job System, and its high-definition remaster, Final Fantasy XII: The Zodiac Age, changed this system by adding a class (or job) system in which classes could be changed and each class had a separate License Board. 
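In contrast to the fixed archetypes sketched earlier, a skill-based system can be pictured as a point budget spent directly on skills, with optional templates that merely pre-fill an archetypal spread. The Python sketch below is hypothetical: the skill names loosely echo the unofficial Fallout archetypes mentioned above, and the point budget and costs are invented for illustration.

# A hypothetical sketch of a classless, skill-based system: the player
# spends a point budget directly on skills, and templates only pre-fill
# archetypal spreads. Skill names and numbers are invented for illustration.

SKILLS = ("small_guns", "survival", "science", "speech", "sneak")

TEMPLATES = {
    # Starting spreads loosely echoing the unofficial Fallout archetypes.
    "shooter":       {"small_guns": 5, "sneak": 1},
    "smooth_talker": {"speech": 5, "science": 1},
}

def build_character(budget, template=None, extra=None):
    """Allocate skill points, raising an error if the budget is exceeded."""
    skills = {s: 0 for s in SKILLS}
    for source in (TEMPLATES.get(template, {}), extra or {}):
        for skill, points in source.items():
            skills[skill] += points
    if sum(skills.values()) > budget:
        raise ValueError("skill allocation exceeds point budget")
    return skills

# A template is only a starting point; the player steers development freely.
pc = build_character(budget=10, template="shooter", extra={"science": 2})
print(pc)

The design difference is visible in the data: the class-based model fixes the spread per archetype, while here the player controls the spread and the template is advisory.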
In shooter games Many multiplayer shooter games use class systems to provide different tactics and styles of play and to promote teamwork and cooperation. These classes may differ only in equipment, or they may feature notable gameplay differences. Most games do not allow players to use elements of multiple classes at the same time, though they typically allow players to switch classes before or during a match through a menu. Some games have progression systems for each individual class, with class-specific unlockable items. Examples of shooter games with classes include the Battlefield series, Star Wars Battlefront II, Rising Storm 2: Vietnam, and Insurgency: Sandstorm. All of these examples include a "heavy" or "support" class, a less mobile class armed with some sort of machine gun and focused on suppressive fire and team support; they also include classes that are simply the standard rifleman class with additional unique equipment (such as "demolitions" classes, typically riflemen with additional explosive items). One notable example is the 2007 team-based shooter Team Fortress 2, which features nine distinct classes divided into three categories: Offense, Defense, and Support. Offense classes (Scout, Soldier, Pyro) specialize in assaulting and overwhelming enemies to complete objectives; Defense classes (Demoman, Heavy, Engineer) specialize in defending positions and hindering enemy advances; and Support classes (Medic, Sniper, Spy) specialize in assisting their team in different ways. Each of these classes features notable gameplay differences meant to suit its category, yet none is limited to a single role, and each can be used for both offensive and defensive playstyles with varying degrees of effectiveness. The classes also all have strengths and weaknesses in a rock-paper-scissors fashion; for example, the Spy is strong against slow or sedentary classes such as the Heavy and Sniper, with equipment that specifically counters the Engineer's constructions, but his stealth abilities are nullified by the Pyro's fire, and he is impractical against more mobile classes such as the Scout. Each class is also treated as its own character, with a unique personality, backstory, and interactions with the other classes. A derivative of these types of classes is seen in hero shooters, where each hero has distinct abilities and weapons that often combine archetypal conventional classes or are unique on their own. See also References |
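The rock-paper-scissors counter relationships described above for Team Fortress 2 amount to a small directed matchup table. The Python sketch below is purely illustrative: its three entries paraphrase the examples given in the text and make no attempt to reproduce the game's actual balance data.

# A hypothetical sketch of rock-paper-scissors-style class counters.
# The entries paraphrase the Team Fortress 2 examples in the text above
# and are illustrative only, not the game's actual matchup data.

COUNTERS = {
    "Spy":   {"Heavy", "Sniper", "Engineer"},  # strong vs slow or sedentary classes
    "Pyro":  {"Spy"},                          # fire nullifies the Spy's stealth
    "Scout": {"Spy"},                          # mobility makes the Spy impractical
}

def has_advantage(attacker, defender):
    """Return True if the attacker's class counters the defender's."""
    return defender in COUNTERS.get(attacker, set())

print(has_advantage("Spy", "Sniper"))  # True
print(has_advantage("Spy", "Scout"))   # False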
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_ref-:0_3-4] | [TOKENS: 4733] |
Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9 1⁄2 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th-century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church, and it remains a titular see to this day.[citation needed] Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic (“to quarrel; withhold, hinder”). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued into the Chalcolithic of the Levant. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze Age, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa. 
Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the late phase included a circular stone structure. Later excavations have produced a later occupation layer, Stratum IV. It consists of two phases: Stratum IVb, with a mudbrick wall on stone foundations and rounded exterior corners, and Stratum IVa, with a mudbrick wall without stone foundations, together with imported Egyptian pottery and local imitations. Another excavation revealed nine occupation strata. Strata VI–III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V–II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish–Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BCE, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken, and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), it is said that Joshua ben Levi founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted. 
In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba Map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod, which was referred to as "al-Ludd" in Arabic, served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque, which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, the Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya during the fourteenth and fifteenth centuries in the Mamluk empire. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596, Lydda was a part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax rate of 33.3% on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special products ("dawalib" = spinning wheels), goats and beehives, in addition to occasional revenues and a market toll, for a total of 45,000 akçe. All of the revenue went to the waqf. In 1051 AH (1641/2 CE), the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to the Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M. 
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as: 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was later renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); the Christians comprised 921 Orthodox, 4 Roman Catholics and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000—18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al-Aref estimated 400, and Nimr al-Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there. 
A key event was the Palestinian expulsion from Lydda and Ramle, in which 50,000–70,000 Palestinians were expelled from the two towns by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command and forced to walk 17 km (10 1⁄2 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in the Economist, a three-metre-high wall was built between Jewish and Arab neighbourhoods, and construction in Jewish areas was given priority over construction in Arab neighbourhoods. The newspaper said that violent crime in the Arab sector revolved mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organizations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy the Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but that the "crackdown came for one side" only. Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim and Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500 people. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people, comprising 69.4% of the city's population, were classified as "Jews and Others", and 23,642 people, comprising 30.6%, as "Arab". Education According to the CBS, there are 38 schools and 13,188 pupils in the city: 26 elementary schools with 8,325 pupils, and 13 high schools with 4,863 pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001.[citation needed] Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard", "Cafe-Co", a subsidiary of the Strauss Group, and "Kashev", the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed. 
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009–2010, Dor Guez held Georgeopolis, an exhibit focusing on Lod, at the Petach Tikva art museum. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to the widening of HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home ground is the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy), was established soon after, but folded in 2007. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Augustine_of_Hippo] | [TOKENS: 15306] |
Augustine of Hippo Augustine of Hippo (/ɔːˈɡʌstɪn/ aw-GUST-in, US also /ˈɔːɡəstiːn/ AW-gə-steen; Latin: Aurelius Augustinus Hipponensis; 13 November 354 – 28 August 430) was a Christian theologian and philosopher from Roman Africa. A native of Thagaste in Numidia Cirtensis (present-day Souk Ahras, Algeria), he was the bishop of Hippo Regius. His writings deeply influenced the development of Western philosophy and Western Christianity, and he is viewed as one of the most important Church Fathers of the Latin Church in the Patristic Period. His many important works include The City of God, On Christian Doctrine, and Confessions. According to his contemporary, Jerome of Stridon, Augustine "established anew the ancient Faith".[a] In his youth he was drawn to the Manichaean faith, and later to the Hellenistic philosophy of Neoplatonism. After his conversion to Christianity and baptism in 386, Augustine developed his own approach to philosophy and theology, accommodating a variety of methods and perspectives. Believing the grace of Christ was indispensable to human freedom, he helped formulate the doctrine of original sin and made significant contributions to the development of just war theory. When the Western Roman Empire began to disintegrate, Augustine imagined the Church as a spiritual City of God, distinct from the material Earthly City. The segment of the Church that adhered to the concept of the Trinity as defined by the Council of Nicaea and the Council of Constantinople closely identified with Augustine's On the Trinity. Augustine is recognized as a saint in the Catholic Church, the Eastern Orthodox Church, the Lutheran churches, and the Anglican Communion. He is also a preeminent Catholic Doctor of the Church and the patron of the Augustinians. His memorial is celebrated on 28 August, the day of his death. Augustine is the patron saint of brewers, printers, theologians, and a number of cities and dioceses. His thought profoundly influenced the medieval worldview. Many Protestants, especially Calvinists and Lutherans, consider him one of the theological fathers of the Protestant Reformation due to his teachings on salvation and divine grace. Protestant Reformers generally, and Martin Luther in particular, held Augustine in preeminence among early Church Fathers. From 1505 to 1521, Luther was a member of the Order of the Augustinian Eremites. In the East, his teachings are more disputed. The most controversial doctrine associated with him, the filioque, was rejected by the Eastern Orthodox Church. Other disputed teachings include his views on original sin, the doctrine of grace, and predestination. Though considered to be mistaken on some points, he is still considered a saint and has influenced some Eastern Church Fathers, most notably Gregory Palamas. In the Greek and Russian Orthodox Churches, his feast day is celebrated on 15 June. Among modern Eastern Orthodox theologians, his views were notably attacked by John Romanides, but others have shown significant approbation, chiefly Georges Florovsky. Life Augustine of Hippo, also known as Saint Augustine or Saint Austin, is known by various cognomens throughout the many denominations of the Christian world, including Blessed Augustine and the Doctor of Grace (Latin: Doctor gratiae). Augustine was born in 354 in the municipium of Thagaste (now Souk Ahras, Algeria) in the Roman province of Numidia. His mother, Monica or Monnica,[b] was a devout Christian; his father Patricius was a pagan who converted to Christianity on his deathbed. 
He had a brother named Navigius and a sister whose name is lost but is conventionally remembered as Perpetua. Scholars generally agree that Augustine and his family were Berbers, an ethnic group indigenous to North Africa, but were heavily Romanized, speaking only Latin at home as a matter of pride and dignity. In his writings, Augustine mentions in passing his identity as a Roman African. For example, he refers to Apuleius as "the most notorious of us Africans," to Ponticianus as "a country man of ours, insofar as being African," and to Faustus of Mileve as "an African Gentleman". Augustine's family name, Aurelius, suggests his father's ancestors were freedmen of the gens Aurelia given full Roman citizenship by the Edict of Caracalla in 212. Augustine's family had been Roman, from a legal standpoint, for at least a century when he was born. It is assumed that his mother, Monica, was of Berber origin, on the basis of her name, but as his family were honestiores, an upper class of citizens known as honorable men, Augustine's first language was likely Latin. At the age of 11, Augustine was sent to school at Madaurus (now M'Daourouch), a small Numidian city about 31 kilometres (19 miles) south of Thagaste. There he became familiar with Latin literature, as well as pagan beliefs and practices. His first insight into the nature of sin occurred when he and a number of friends stole pears from a neighbourhood garden. He tells this story in his autobiography, Confessions. He recalls that the pears were "tempting neither for its colour nor its flavour" – he was neither hungry nor poor, and he had an abundance of fruit that was "much better". Over the next few chapters, Augustine agonises over this past sin of his, recognising that one does not desire evil for evil's sake. Rather, "through an inordinate preference for these goods of a lower kind, the better and higher are neglected". In other words, man is drawn to sin when grossly choosing the lesser good over a greater good. Eventually, Augustine concludes that it was the good of the "companionship" between him and his accomplices that allowed him to delight in this theft. At the age of 17, through the generosity of his fellow citizen Romanianus, Augustine went to Carthage to continue his education in rhetoric, though it was above the financial means of his family. Despite his mother's warnings, as a youth Augustine lived a hedonistic lifestyle for a time, associating with young men who boasted of their sexual exploits. The need to gain their acceptance encouraged inexperienced boys like Augustine to seek or make up stories about sexual experiences. It was while he was a student in Carthage that he read Cicero's dialogue Hortensius (now lost), which he described as leaving a lasting impression, enkindling in his heart the love of wisdom and a great thirst for truth. It started his interest in philosophy. Although raised Christian, Augustine became a Manichaean, much to his mother's chagrin. At about the age of 17, Augustine began a relationship with a young woman in Carthage. Though his mother wanted him to marry a person of his class, the woman remained his lover. He was warned by his mother to avoid fornication (sex outside marriage), but Augustine persisted in the relationship for over fifteen years, and the woman gave birth to his son Adeodatus (372–388), whose name means "Gift from God" and who was viewed as extremely intelligent by his contemporaries. 
In 385, Augustine ended his relationship with his lover in order to prepare to marry a teenage heiress. By the time he was able to marry her, however, he had already converted to Christianity and decided to become a Christian priest, and the marriage did not happen. Augustine was, from the beginning, a brilliant student, with an eager intellectual curiosity, but he never mastered Greek – his first Greek teacher was a brutal man who constantly beat his students, and Augustine rebelled and refused to study. By the time he realized he needed to know Greek, it was too late; and although he acquired a smattering of the language, he was never eloquent with it. He did, however, become a master of Latin. Augustine taught grammar at Thagaste between 373 and 374. The following year he moved to Carthage to conduct a school of rhetoric and remained there for the next nine years. Disturbed by unruly students in Carthage, in 383 he moved to Rome, where he believed the best and brightest rhetoricians practised, to establish a school there. However, Augustine was disappointed with the apathetic reception. It was the custom for students to pay their fees to the professor on the last day of the term, and many students attended faithfully all term and then did not pay. Manichaean friends introduced him to the prefect of the City of Rome, Symmachus, who had been asked by the imperial court at Milan to provide a rhetoric professor. Augustine won the job and headed north to take his position in Milan in late 384. At thirty years old, he had won the most visible academic position in the Latin world, at a time when such posts gave ready access to political careers. Although Augustine spent ten years as a Manichaean, he was never an initiate or "elect", but an "auditor", the lowest level in the religion's hierarchy. While still at Carthage, a disappointing meeting with the Manichaean bishop Faustus of Mileve, a key exponent of Manichaean theology, started Augustine's scepticism of Manichaeanism. In Rome, he reportedly turned away from Manichaeanism, embracing the scepticism of the New Academy movement. Because of his education, Augustine had great rhetorical prowess and was very knowledgeable about the philosophies behind many faiths. At Milan, his mother's religiosity, Augustine's own studies in Neoplatonism, and his friend Simplicianus all urged him towards Christianity. This was shortly after the Roman emperor Theodosius I had declared Christianity to be the only legitimate religion for the Roman Empire, on 27 February 380 by the Edict of Thessalonica, and had issued a decree of death for all Manichaean monks in 382. Initially, Augustine was not strongly influenced by Christianity and its ideologies, but after coming in contact with Ambrose of Milan, Augustine reevaluated himself and was forever changed. Augustine arrived in Milan and visited Ambrose, having heard of his reputation as an orator. Like Augustine, Ambrose was a master of rhetoric, but older and more experienced. Soon, their relationship grew, as Augustine wrote, "And I began to love him, of course, not at the first as a teacher of the truth, for I had entirely despaired of finding that in thy Church – but as a friendly man." Augustine was very much influenced by Ambrose, even more than by his own mother and others he admired. In his Confessions, Augustine states, "That man of God received me as a father would, and welcomed my coming as a good bishop should." Ambrose adopted Augustine as a spiritual son after the death of Augustine's father. 
Augustine's mother had followed him to Milan and arranged a respectable marriage for him. Although Augustine acquiesced, he had to dismiss his concubine and grieved for having forsaken his lover. He wrote, "My mistress being torn from my side as an impediment to my marriage, my heart, which clave to her, was racked, and wounded, and bleeding." Augustine confessed he had not been a lover of wedlock so much as a slave of lust, so he procured another concubine since he had to wait two years until his fiancée came of age. However, his emotional wound was not healed. It was during this period that he uttered his famously insincere prayer, "Grant me chastity and continence, but not yet." There is evidence Augustine may have considered this former relationship to be equivalent to marriage. In his Confessions, he admitted the experience eventually produced a decreased sensitivity to pain. Augustine eventually broke off his engagement to his eleven-year-old fiancée but never renewed his relationship with either of his concubines. Alypius of Thagaste steered Augustine away from marriage, saying they could not live a life together in the love of wisdom if he married. Augustine looked back years later on the life at Cassiciacum, a villa outside of Milan where he gathered with his followers, and described it as Christianae vitae otium – the leisure of Christian life. In late August of 386,[c] at the age of 31, having heard of Ponticianus's and his friends' first reading of the life of Anthony of the Desert, Augustine converted to Christianity. As Augustine later told it, his conversion was prompted by hearing a child's voice say "take up and read" (Latin: tolle, lege). Resorting to the sortes biblicae, he opened a book of St. Paul's writings (Confessiones 8.12.29) at random and read Romans 13:13–14: "Not in rioting and drunkenness, not in chambering and wantonness, not in strife and envying, but put on the Lord Jesus Christ, and make no provision for the flesh to fulfil the lusts thereof." He later wrote an account of his conversion in his Confessions (Latin: Confessiones), which has since become a classic of Christian theology and a key text in the history of autobiography. This work is an outpouring of thanksgiving and penitence. Although it is written as an account of his life, the Confessions also talks about the nature of time, causality, free will, and other important philosophical topics. The following is taken from that work: Belatedly I loved thee, O Beauty so ancient and so new, belatedly I loved thee. For see, thou wast within and I was without, and I sought thee out there. Unlovely, I rushed heedlessly among the lovely things thou hast made. Thou wast with me, but I was not with thee. These things kept me far from thee; even though they were not at all unless they were in thee. Thou didst call and cry aloud, and didst force open my deafness. Thou didst gleam and shine, and didst chase away my blindness. Thou didst breathe fragrant odours and I drew in my breath; and now I pant for thee. I tasted, and now I hunger and thirst. Thou didst touch me, and I burned for thy peace. Ambrose baptized Augustine and his son Adeodatus, in Milan on Easter Vigil, 24–25 April 387. A year later, in 388, Augustine completed his apology On the Holiness of the Catholic Church. That year, also, Adeodatus and Augustine returned home to Africa. Augustine's mother Monica died at Ostia, Italy, as they prepared to embark for Africa. 
Upon their arrival, they began a life of aristocratic leisure at Augustine's family's property. Soon after, Adeodatus, too, died. Augustine then sold his patrimony and gave the money to the poor. He kept only the family house, which he converted into a monastic foundation for himself and a group of friends. While he was known for his major contributions to Christian rhetoric, another major contribution was his preaching style. After converting to Christianity, Augustine turned away from his profession as a rhetoric professor in order to devote more time to preaching. In 391 Augustine was ordained a priest in Hippo Regius (now Annaba), in Algeria. He was especially interested in discovering how his previous rhetorical training in Italian schools could help the Christian Church achieve its objective of discovering and teaching the scriptures of the Bible. He became a famous preacher (more than 350 preserved sermons are believed to be authentic), and was noted for combating the Manichaean religion, to which he had formerly adhered. He is thought to have preached between 6,000 and 10,000 sermons during his lifetime; however, only around 500 of these are accessible today. When Augustine preached, his sermons were recorded by stenographers. Some of his sermons would last over an hour, and he would preach multiple times in a given week. When talking to his audience, he would stand on an elevated platform, but he would walk towards the audience during his sermons. When he was preaching, he used a variety of rhetorical devices, including analogies, word pictures, similes, metaphors, repetition, and antithesis, when trying to explain more about the Bible. In addition, he used questions and rhymes when talking about the differences between people's life on Earth and in Heaven, as seen in one of his sermons preached in 412 AD. Augustine believed that the preacher's ultimate goal is to ensure the salvation of his audience. In 395, he was made coadjutor Bishop of Hippo and became full Bishop shortly thereafter, hence the name "Augustine of Hippo"; and he gave his property to the church of Thagaste. He remained in that position until his death in 430. In his day, bishops were the only individuals permitted to preach, and after being ordained he scheduled time to preach despite a busy schedule made up of preparing sermons and preaching at other churches besides his own. When serving as the Bishop of Hippo, his goal was to minister to the individuals in his congregation, and he would choose the passages that the church planned to read every week. As bishop, he believed it was his job to interpret the work of the Bible. He wrote his autobiographical Confessions in 397–398. His work The City of God was written to console his fellow Christians shortly after the Visigoths had sacked Rome in 410. Augustine worked tirelessly to convince the people of Hippo to convert to Christianity. Though he had left his monastery, he continued to lead a monastic life in the episcopal residence. Much of Augustine's later life was recorded by his friend Possidius, bishop of Calama (present-day Guelma, Algeria), in his Sancti Augustini Vita. During this latter part of Augustine's life, he helped lead a large community of Christians against different political and religious factions, which had a major influence on his writings. Possidius admired Augustine as a man of powerful intellect and a stirring orator who took every opportunity to defend Christianity against its detractors. 
Possidius also described Augustine's personal traits in detail, drawing a portrait of a man who ate sparingly, worked tirelessly, despised gossip, shunned the temptations of the flesh, and exercised prudence in the financial stewardship of his see. Death and sainthood Shortly before Augustine's death, the Vandals, a Germanic tribe that had converted to Arianism, invaded Roman Africa. The Vandals besieged Hippo in the spring of 430, when Augustine entered his final illness. According to Possidius, one of the few miracles attributed to Augustine, the healing of an ill man, took place during the siege. Augustine is said to have excommunicated himself at the approach of his death in an act of public penance and solidarity with sinners. Spending his final days in prayer and repentance, he requested that the penitential Psalms of David be hung on his walls so he could read them, which, according to Possidius' biography, led him to "[weep] freely and constantly". He directed that the library of the church in Hippo and all the books therein should be carefully preserved. He died on 28 August 430. Shortly after his death, the Vandals lifted the siege of Hippo, but they returned soon after and burned the city. They destroyed all but Augustine's cathedral and library, which they left untouched. Augustine was canonized by popular acclaim, and later recognized as a Doctor of the Church in 1298 by Pope Boniface VIII. His feast day is 28 August. He is considered the patron saint of brewers, printers, theologians, and a number of cities and dioceses. He is invoked against sore eyes. Augustine is remembered in the Church of England's calendar of saints with a lesser festival on 28 August. According to Bede's True Martyrology, Augustine's body was later translated, or moved, to Cagliari, Sardinia, by the Catholic bishops expelled from North Africa by Huneric. Around 720, his remains were transported again by Peter, bishop of Pavia and uncle of the Lombard king Liutprand, to the church of San Pietro in Ciel d'Oro in Pavia, to save them from frequent coastal raids by Saracens. In January 1327, Pope John XXII issued the papal bull Veneranda Santorum Patrum, in which he appointed the Augustinians guardians of the tomb of Augustine (called Arca), which was remade in 1362 and elaborately carved with bas-reliefs of scenes from Augustine's life, created by Giovanni di Balduccio. In October 1695, some workmen in the Church of San Pietro in Ciel d'Oro in Pavia discovered a marble box containing human bones (including part of a skull). A dispute arose between the Augustinian hermits (Order of Saint Augustine) and the regular canons (Canons Regular of Saint Augustine) as to whether these were the bones of Augustine. The hermits did not believe so; the canons affirmed they were. Eventually Pope Benedict XIII (1724–1730) directed the Bishop of Pavia, Monsignor Pertusati, to make a determination. The bishop declared that, in his opinion, the bones were those of Augustine. After the Augustinians were expelled from Pavia in 1785, Augustine's ark and relics were brought to Pavia Cathedral in 1799. San Pietro fell into disrepair but was finally restored in the 1870s, at the urging of Agostino Gaetano Riboldi, and reconsecrated in 1896 when the relics of Augustine and the shrine were once again reinstalled. In 1842, a portion of Augustine's right arm (cubitus) was secured from Pavia and returned to Annaba. 
It now rests in the Saint Augustin Basilica within a glass tube inserted into the arm of a life-size marble statue of the saint. Views and thought Augustine's large body of writings covered diverse fields including theology, philosophy and sociology. Along with John Chrysostom, Augustine was among the most prolific scholars of the early church by quantity. Augustine was one of the first Christian ancient Latin authors with a very clear vision of theological anthropology. He saw the human being as a perfect unity of soul and body. In his late treatise On Care to Be Had for the Dead, section 5 (420), he exhorted respect for the body on the grounds it belonged to the very nature of the human person. Augustine's favourite figure to describe body-soul unity is marriage: caro tua, coniunx tua – your body is your wife. Augustine believed that though initially the two elements of body and soul were in perfect harmony, after the fall of humanity they came into dramatic combat with one another. He wrote of them as two categorically different things: the body as a three-dimensional object composed of the four elements, and the soul as spatially dimensionless. He further defined the soul as a kind of substance, participating in reason, fit for ruling the body. Augustine was not preoccupied, as Plato and Descartes were, with detailed efforts to explain the metaphysics of the soul-body union. It sufficed for him to admit they are metaphysically distinct: to be a human is to be a composite of soul and body, with the soul superior to the body. The latter statement is grounded in his hierarchical classification of things into those that merely exist, those that exist and live, and those that exist, live, and have intelligence or reason. In sermons directed at Manichaeism, Augustine focused on its error of teaching that the soul was part of God, rather than admitting that the soul is 'full of illusions'. Like other Church Fathers such as Athenagoras, Tertullian, Clement of Alexandria and Basil of Caesarea, Augustine "vigorously condemned the practice of induced abortion", and although he disapproved of abortion during any stage of pregnancy, he made a distinction between early and later abortions. He acknowledged the distinction between "formed" and "unformed" fetuses mentioned in the Septuagint translation of Exodus 21:22–23, which incorrectly translates the word "harm" (from the original Hebrew text) as "form" in the Koine Greek of the Septuagint. His view was based on the Aristotelian distinction "between the fetus before and after its supposed 'vivification'". Therefore, he did not classify the abortion of an "unformed" fetus as murder since he thought it could not be known with certainty the fetus had received a soul. Augustine held that "the timing of the infusion of the soul was a mystery known to God alone". However, he considered procreation as "one of the goods of marriage; abortion figured as a means, along with drugs which cause sterility, of frustrating this good. It lay along a continuum which included infanticide as an instance of 'lustful cruelty' or 'cruel lust.' Augustine called the use of means to avoid the birth of a child an 'evil work:' a reference to either abortion or contraception or both." In City of God, Augustine rejected contemporary ideas of ages (such as those of certain Greeks and Egyptians) that differed from the Church's sacred writings. 
In The Literal Interpretation of Genesis, Augustine argued that God had created everything in the universe simultaneously and not over a period of six days. He argued the six-day structure of creation presented in the Book of Genesis represents a logical framework, rather than the passage of time in a physical way – it would bear a spiritual, rather than physical, meaning, which is no less literal. One reason for this interpretation is the passage in Sirach 18:1, creavit omnia simul ("He created all things at once"), which Augustine took as proof that the days of Genesis 1 had to be taken non-literally. As additional support for describing the six days of creation as a heuristic device, Augustine thought the actual event of creation would be incomprehensible to humans and therefore needed to be translated into terms they could grasp. Augustine also does not envision original sin as causing structural changes in the universe, and even suggests that the bodies of Adam and Eve were already created mortal before the Fall. Apart from his specific views, Augustine recognized that interpreting the creation story was difficult, and remarked that interpretations could change should new information come up. Augustine developed his doctrine of the Church principally in reaction to the Donatist sect. He taught there is one Church, but within this Church there are two realities, namely, the visible aspect (the institutional hierarchy, the Catholic sacraments, and the laity) and the invisible (the souls of those in the Church, who are either dead, sinful members or elect predestined for Heaven). The former is the institutional body established by Christ on earth which proclaims salvation and administers the sacraments, while the latter is the invisible body of the elect, made up of genuine believers from all ages, who are known only to God. The Church, which is visible and societal, will be made up of "wheat" and "tares", that is, good and wicked people (as per Mat. 13:30), until the end of time. This concept countered the Donatist claim that only those in a state of grace were the "true" or "pure" church on earth, and that priests and bishops who were not in a state of grace had no authority or ability to confect the sacraments. Augustine's ecclesiology was more fully developed in City of God. There he conceives of the church as a heavenly city or kingdom, ruled by love, which will ultimately triumph over all earthly empires which are self-indulgent and ruled by pride. Augustine followed Cyprian in teaching that bishops and priests of the Church are the successors of the Apostles, and their authority in the Church is God-given. The concept of the Church invisible was advocated by Augustine as part of his refutation of the Donatist sect, though he, like other Church Fathers before him, saw the invisible Church and visible Church as one and the same thing, unlike the later Protestant reformers who did not identify the Catholic Church as the true church. He was strongly influenced by the Platonist belief that true reality is invisible and that, if the visible reflects the invisible, it does so only partially and imperfectly (see Theory of Forms). Others question whether Augustine really held to some form of an "invisible true Church" concept. Augustine originally believed in premillennialism, namely that Christ would establish a literal 1,000-year kingdom prior to the general resurrection, but later rejected the belief, viewing it as carnal. 
During the medieval period, the Catholic Church built its system of eschatology on Augustinian amillennialism, where Christ rules the earth spiritually through his triumphant church. During the Reformation, theologians such as John Calvin accepted amillennialism. Augustine taught that the eternal fate of the soul is determined at death, and that purgatorial fires of the intermediate state purify only those who died in communion with the Church. His teaching provided fuel for later theology. Although Augustine did not develop an independent Mariology, his statements on Mary surpass in number and depth those of other early writers. Even before the Council of Ephesus, he defended the Ever-Virgin Mary as the Mother of God, believing her to be "full of grace" (following earlier Latin writers such as Jerome) on account of her sexual integrity and innocence. Likewise, he affirmed that the Virgin Mary "conceived as virgin, gave birth as virgin and stayed virgin forever". Augustine took the view that, if a literal interpretation contradicts science and humans' God-given reason, the biblical text should be interpreted metaphorically. While each passage of Scripture has a literal sense, this "literal sense" does not always mean the Scriptures are mere history; at times they are rather an extended metaphor. Augustine taught that the sin of Adam and Eve was either an act of foolishness (insipientia) followed by pride and disobedience to God, or else that pride came first.[d] The first couple disobeyed God, who had told them not to eat of the Tree of the knowledge of good and evil (Gen 2:17). The tree was a symbol of the order of creation. Self-centeredness made Adam and Eve eat of it, thus failing to acknowledge and respect the world as it was created by God, with its hierarchy of beings and values.[e] Augustine wrote that Adam and Eve would not have fallen into pride and lack of wisdom if Satan had not sown into their senses "the root of evil" (radix Mali). Their nature was wounded, according to Augustine, by concupiscence or libido, which affected human intelligence and will, as well as affections and desires, including sexual desire.[f] In terms of metaphysics, Augustine found concupiscence to be not a state of being but a bad quality, the privation of good or a wound. Augustine's understanding of the consequences of original sin and the necessity of redeeming grace was developed in the struggle against Pelagius and his Pelagian disciples, Caelestius and Julian of Eclanum, who had been inspired by Rufinus of Syria, a disciple of Theodore of Mopsuestia. They refused to agree that original sin wounded human will and mind, insisting human nature was given the power to act, to speak, and to think when God created it. Human nature cannot lose its moral capacity for doing good, but a person is free to act or not act in a righteous way. Pelagius gave an example of eyes: they have the capacity for seeing, but a person can make either good or bad use of it. Pelagians insisted human affections and desires were not touched by the fall either. In the Pelagian view, immorality, e.g. fornication, is exclusively a matter of will, i.e. a person does not use natural desires in a proper way. In opposition, Augustine pointed out the apparent disobedience of the flesh to the spirit, and explained it as one of the results of original sin, punishment of Adam and Eve's disobedience to God. Augustine had served as a "Hearer" for the Manichaeans, who taught that the original sin was carnal knowledge, for about nine years. 
But his struggle to understand the cause of evil in the world started before that, at the age of nineteen. By malum (evil) he understood most of all concupiscence, which he interpreted as a vice dominating people and causing in men and women moral disorder. Agostino Trapè insists Augustine's personal experience cannot be credited for his doctrine about concupiscence. He considers Augustine's marital experience to be quite normal, and even exemplary, aside from the absence of Christian wedding rites. As J. Brachtendorf showed, Augustine used the Ciceronian Stoic concept of passions to interpret Paul's doctrine of universal sin and redemption. The view that not only the human soul but also the senses were influenced by the fall of Adam and Eve was prevalent in Augustine's time among the Fathers of the Church. It is clear the reason for Augustine's distancing from the affairs of the flesh was different from that of Plotinus, a Neoplatonist[g] who taught that only through disdain for fleshly desire could one reach the ultimate state of mankind. Augustine taught the redemption, i.e. transformation and purification, of the body in the resurrection. Some authors perceive Augustine's doctrine as directed against human sexuality and attribute his insistence on continence and devotion to God as coming from his need to reject his own highly sensual nature as described in the Confessions.[h] Augustine taught that human sexuality has been wounded, together with the whole of human nature, and requires the redemption of Christ. That healing is a process realized in conjugal acts. The virtue of continence is achieved thanks to the grace of the sacrament of Christian marriage, which becomes therefore a remedium concupiscentiae – remedy of concupiscence. The redemption of human sexuality will be, however, fully accomplished only in the resurrection of the body. Augustine also taught that the sin of Adam is inherited by all human beings. Already in his pre-Pelagian writings, Augustine taught that Original Sin is transmitted to Adam's descendants by concupiscence, which he regarded as the passion of both soul and body,[i] making humanity a massa damnata (mass of perdition, condemned crowd) and much enfeebling, though not destroying, the freedom of the will. Although earlier Christian authors taught the elements of physical death, moral weakness, and a sin propensity within original sin, Augustine was the first to add the concept of inherited guilt (reatus) from Adam whereby an infant was eternally damned at birth. Although Augustine's anti-Pelagian defence of original sin was confirmed at numerous councils, i.e. Carthage (418), Ephesus (431), Orange (529), Trent (1546) and by popes, i.e. Pope Innocent I (401–417) and Pope Zosimus (417–418), his doctrine that inherited guilt eternally damns infants was omitted by these councils and popes. Anselm of Canterbury established in his Cur Deus Homo the definition that was followed by the great 13th-century Schoolmen, namely that Original Sin is the "privation of the righteousness which every man ought to possess," thus separating it from concupiscence, with which some of Augustine's disciples had identified it, as later did Luther and Calvin. In 1567, Pope Pius V condemned the identification of Original Sin with concupiscence. Augustine taught that God orders all things while preserving human freedom. Prior to 396, he believed predestination was based on God's foreknowledge of whether individuals would believe in Christ, and that God's grace was "a reward for human assent". 
Later, in response to Pelagius, Augustine said that the sin of pride consists in assuming "we are the ones who choose God or that God chooses us (in his foreknowledge) because of something worthy in us", and argued that God's grace causes the individual act of faith. Scholars are divided over whether Augustine's teaching implies double predestination, or the belief God chooses some people for damnation as well as some for salvation. Catholic scholars tend to deny he held such a view while some Protestants and secular scholars have held that Augustine did believe in double predestination. About 412, Augustine became the first Christian to understand predestination as a divine unilateral pre-determination of individuals' eternal destinies independently of human choice, although his prior Manichaean sect did teach this concept. Some Protestant theologians, such as Justo L. González and Bengt Hägglund, interpret Augustine as teaching that grace is irresistible, results in conversion, and leads to perseverance. In On Rebuke and Grace (De correptione et gratia), Augustine wrote: "And what is written, that He wills all men to be saved, while yet all men are not saved, may be understood in many ways, some of which I have mentioned in other writings of mine; but here I will say one thing: He wills all men to be saved, is so said that all the predestinated may be understood by it, because every kind of men is among them." Speaking of the twins Jacob and Esau, Augustine wrote in his book On the Gift of Perseverance, "[I]t ought to be a most certain fact that the former is of the predestinated, the latter is not." Also in reaction to the Donatists, Augustine developed a distinction between the "regularity" and "validity" of the sacraments. Regular sacraments are performed by clergy of the Catholic Church, while sacraments performed by schismatics are considered irregular. Nevertheless, the validity of the sacraments does not depend upon the holiness of the priests who perform them (ex opere operato); therefore, irregular sacraments are still accepted as valid provided they are done in the name of Christ and in the manner prescribed by the Church. On this point, Augustine departs from the earlier teaching of Cyprian, who taught that converts from schismatic movements must be re-baptised. Augustine taught that sacraments administered outside the Catholic Church, though true sacraments, avail nothing. However, he also stated that baptism, while it does not confer any grace when done outside the Church, does confer grace as soon as one is received into the Catholic Church. Augustine believed in the real presence of Christ in the Eucharist, saying that Christ's statement, "This is my body", referred to the bread he carried in his hands, and that Christians must have faith the bread and wine are in fact the body and blood of Christ, despite what they see with their eyes. For instance, he stated that "He [Jesus] walked here in the same flesh, and gave us the same flesh to be eaten unto salvation. But no one eats that flesh unless first he adores it; and thus it is discovered how such a footstool of the Lord's feet is adored; and not only do we not sin by adoring, we do sin by not adoring." Presbyterian professor and author John Riggs argued that Augustine held that Christ is really present in the elements of the Eucharist, but not in a bodily manner, because his body remains in Heaven. Against the Pelagians, Augustine strongly stressed the importance of infant baptism. 
On the question of whether baptism is an absolute necessity for salvation, however, Augustine appears to have refined his beliefs during his lifetime, causing some confusion among later theologians about his position. He said in one of his sermons that only the baptized are saved. This belief was shared by many early Christians. However, a passage from his City of God, concerning the Apocalypse, may indicate Augustine did believe in an exception for children born to Christian parents. Augustine's contemporaries often believed astrology to be an exact and genuine science. Its practitioners were regarded as true men of learning and called mathematici. Astrology played a prominent part in Manichaean doctrine, and Augustine himself was attracted by their books in his youth, being particularly fascinated by those who claimed to foretell the future. Later, as a bishop, he warned that one should avoid astrologers who combine science and horoscopes. (Augustine's term "mathematici", meaning "astrologers", is sometimes mistranslated as "mathematicians".) According to Augustine, they were not genuine students of Hipparchus or Eratosthenes but "common swindlers". Epistemological concerns shaped Augustine's intellectual development. His early dialogues Contra academicos (386) and De Magistro (389), both written shortly after his conversion to Christianity, reflect his engagement with sceptical arguments and show the development of his doctrine of divine illumination. The doctrine of illumination claims God plays an active and regular part in human perception and understanding by illuminating the mind so human beings can recognize intelligible realities God presents (as opposed to God designing the human mind to be reliable consistently, as in, for example, Descartes's idea of clear and distinct perceptions). According to Augustine, illumination is available to all rational minds and is different from other forms of sense perception. It is meant to be an explanation of the conditions required for the mind to have a connection with intelligible entities. Augustine also posed the problem of other minds throughout different works, most famously perhaps in On the Trinity (VIII.6.9), and developed what has come to be a standard solution: the argument from analogy to other minds. In contrast to Plato and other earlier philosophers, Augustine recognized the centrality of testimony to human knowledge and argued that what others tell us can provide knowledge even if we do not have independent reasons to believe their testimonial reports. Augustine asserted Christians should be pacifists as a personal, philosophical stance. However, peacefulness in the face of a grave wrong that could only be stopped by violence would be a sin. Defence of one's self or others could be a necessity, especially when authorized by a legitimate authority. While he did not spell out the conditions necessary for a war to be just, Augustine nonetheless coined the phrase "just war" in his work The City of God. In essence, the pursuit of peace must include the option of fighting for its long-term preservation. Such a war could not be pre-emptive, but defensive, to restore peace. Thomas Aquinas, centuries later, used the authority of Augustine's arguments in an attempt to define the conditions under which a war could be just. Included in Augustine's earlier theodicy is the claim God created humans and angels as rational beings possessing free will. Free will was not intended for sin, meaning it is not equally predisposed to both good and evil. 
A will defiled by sin is not considered as "free" as it once was because it is bound by material things, which could be lost or be difficult to part with, resulting in unhappiness. Sin impairs free will, while grace restores it. Only a will that was once free can be subjected to sin's corruption. After 412, Augustine changed his theology, teaching that humanity had no free will to believe in Christ but only a free will to sin: "I in fact strove on behalf of the free choice of the human 'will,' but God's grace conquered" (Retract. 2.1). The early Christians opposed the deterministic views (e.g., fate) of Stoics, Gnostics, and Manichaeans prevalent in the first four centuries. Christians championed the concept of a relational God who interacts with humans rather than a Stoic or Gnostic God who unilaterally foreordained every event (yet Stoics still claimed to teach free will). Patristics scholar Ken Wilson argues that every early Christian author with extant writings who wrote on the topic prior to Augustine of Hippo (412) advanced human free choice rather than a deterministic God. According to Wilson, Augustine taught traditional free choice until 412, when he reverted to his earlier Manichaean and Stoic deterministic training when battling the Pelagians. Only a few Christians accepted Augustine's view of free will until the Protestant Reformation, when both Luther and Calvin embraced Augustine's deterministic teachings wholeheartedly. The Catholic Church considers Augustine's teaching to be consistent with free will. He often said that anyone can be saved if they wish. While God knows who will and will not be saved, with no possibility for the latter to be saved in their lives, this knowledge represents God's perfect knowledge of how humans will freely choose their destinies. Augustine was among the earliest to examine the legitimacy of the laws of man, and to attempt to define the boundaries of what laws and rights occur naturally, instead of being arbitrarily imposed by mortals. All who have wisdom and conscience, he concludes, are able to use reason to recognize the lex naturalis, natural law. Mortal law should not attempt to force people to do what is right or avoid what is wrong, but simply to remain just. Therefore, "an unjust law is no law at all". People are not obligated to obey laws that are unjust, those that their conscience and reason tell them violate natural law and rights. Augustine led many clergy under his authority at Hippo to free their slaves as a "pious and holy" act. He boldly wrote a letter urging the emperor to set up a new law against slave traders and was very much concerned about the sale of children. For 25 years, the Christian emperors of his time had permitted the sale of children, not because they approved of the practice, but as a way of preventing infanticide when parents were unable to care for a child. Augustine noted that the tenant farmers in particular were driven to hire out or to sell their children as a means of survival. In his book The City of God, he presents the development of slavery as a product of sin and as contrary to God's divine plan. He wrote that God "did not intend that this rational creature, who was made in his image, should have dominion over anything but the irrational creation – not man over man, but man over the beasts". Thus he wrote that righteous men in primitive times were made shepherds of cattle, not kings over men. "The condition of slavery is the result of sin", he declared. 
In The City of God, Augustine wrote that he felt the existence of slavery was a punishment for the existence of sin, even if an individual enslaved person committed no sin meriting punishment. He wrote: "Slavery is, however, penal, and is appointed by that law which enjoins the preservation of the natural order and forbids its disturbance." Augustine believed slavery did more harm to the slave owner than to the enslaved person himself: "the lowly position does as much good to the servant as the proud position does harm to the master." Augustine proposes as a solution to sin a type of cognitive reimagining of one's situation, where slaves "may themselves make their slavery in some sort free, by serving not in crafty fear, but in faithful love," until the end of the world eradicates slavery for good: "until all unrighteousness pass away, and all principality and every human power be brought to nothing, and God be all in all." After Christianity became the state religion of the Roman Empire, it was seen as the fulfillment of Old Testament prophecies and as evidence of Christianity's truth over Judaism. The destruction of the Second Temple and the resulting Jewish exile were often viewed as divine punishment for the Jewish rejection of Jesus, and led Christian thinkers to grapple with the continued existence of Jews in their midst. In this context, Augustine developed a theological justification of Jewish presence within a Christian society that came to be known as the witness theory of the Jewish diaspora. Against certain Christian movements, some of which rejected the use of Hebrew Scripture, Augustine countered that God had chosen the Jews as a special people, and he considered the scattering of Jewish people by the Roman Empire to be a fulfilment of prophecy as well as punishment for their rejection of Christ. Nevertheless, Jews should not be slain or forcibly converted because they provide Christianity with an ever-present witness to its own validity, as their own scriptures, the Old Testament, foretold the coming and resurrection of Christ. Their blind adherence to the Old Testament was proof that Christians had not faked the claims in the New Testament, although the Jews remained blind to the truth. Especially relevant for Augustine's justification was Psalm 59:11, "Slay them not, lest at some time they forget your Law". Augustine, who believed Jewish people would be converted to Christianity at "the end of time", argued God had allowed them to survive their dispersion as a warning to Christians; as such, he argued, they should be permitted to dwell in Christian lands. The sentiment sometimes attributed to Augustine that Christians should let the Jews "survive but not thrive" (it is repeated by the author James Carroll in his book Constantine's Sword, for example) is apocryphal and is not found in any of his writings. Augustine's theological justification became the Catholic Church's official policy through the endorsement of pope Gregory the Great in the sixth century and allowed Jews to live in relative safety among their Christian hosts up until the twelfth century. For Augustine, the evil of sexual immorality was not in the sexual act itself, but in the emotions that typically accompany it. In On Christian Doctrine Augustine contrasts love, which is enjoyment on account of God, and lust, which is not on account of God. 
Augustine claims that, following the Fall, sexual lust (concupiscentia) has become necessary for copulation (as required to stimulate male erection), that sexual lust is an evil result of the Fall, and that, therefore, evil must inevitably accompany sexual intercourse (On marriage and concupiscence). Therefore, following the Fall, even marital sex carried out merely to procreate inevitably perpetuates evil (On marriage and concupiscence 1.27; A Treatise against Two Letters of the Pelagians 2.27). For Augustine, proper love exercises a denial of selfish pleasure and the subjugation of corporeal desire to God. The only way to avoid evil caused by sexual intercourse is to take the "better" way (Confessions 8.2) and abstain from marriage (On marriage and concupiscence 1.31). Sex within marriage is not, however, for Augustine a sin, although it necessarily produces the evil of sexual lust. Based on the same logic, Augustine also declared the pious virgins raped during the sack of Rome to be innocent because they did not intend to sin nor enjoy the act. Before the Fall, Augustine believed sex was a passionless affair, "just like many a laborious work accomplished by the compliant operation of our other limbs, without any lascivious heat", and that the seed "might be sown without any shameful lust, the genital members simply obeying the inclination of the will". After the Fall, by contrast, the penis cannot be controlled by mere will, subject instead to both unwanted impotence and involuntary erections. Augustine censured those who try to prevent the creation of offspring when engaging in sexual relations, saying that though they may be nominally married they are not really, but are using that designation as a cloak for turpitude. Augustine believed Adam and Eve had both already chosen in their hearts to disobey God's command not to eat of the Tree of Knowledge before Eve took the fruit, ate it, and gave it to Adam. Accordingly, Augustine did not believe Adam was any less guilty of sin. Augustine praises women and their role in society and in the Church. He believed that "woman has been made for man" and that "in sex she is physically subject to him in the same way as our natural impulses need to be subjected to the reasoning power of the mind, in order that the actions to which they lead may be inspired by the principles of good conduct". For Augustine, women were created as a "helper" for men. Augustine is considered an influential figure in the history of education. An early work of Augustine's, De Magistro (On the Teacher), contains insights into education. His ideas changed as he found better directions or better ways of expressing his ideas. In the last years of his life, Augustine wrote his Retractationes (Retractations), reviewing his writings and improving specific texts. Henry Chadwick believes an accurate translation of "retractationes" may be "reconsiderations". Reconsiderations can be seen as an overarching theme of the way Augustine learned. Augustine's understanding of the search for understanding, meaning, and truth as a restless journey leaves room for doubt, development, and change. Augustine was a strong advocate of critical thinking skills. Because written works were limited during this time, spoken communication of knowledge was very important. His emphasis on the importance of community as a means of learning distinguishes his pedagogy from some others. 
Augustine believed dialectic is the best means for learning and that this method should serve as a model for learning encounters between teachers and students. Augustine's dialogue writings model the need for lively interactive dialogue among learners. He recommended adapting educational practices to fit the students' educational backgrounds: If a student has been well educated in a wide variety of subjects, the teacher must be careful not to repeat what they have already learned, but to challenge the student with material they do not yet know thoroughly. With the student who has had no education, the teacher must be patient, willing to repeat things until the student understands, and sympathetic. Perhaps the most difficult student, however, is the one with an inferior education who believes he understands something when he does not. Augustine stressed the importance of showing this type of student the difference between "having words and having understanding" and of helping the student to remain humble with his acquisition of knowledge.[citation needed] Under the influence of Bede, Alcuin, and Rabanus Maurus, De catechizandis rudibus came to exercise an important role in the education of clergy at the monastic schools, especially from the eighth century onwards. Augustine believed students should be given an opportunity to apply learned theories to practical experience. Yet another of Augustine's major contributions to education is his study of the styles of teaching. He claimed there are two basic styles a teacher uses when speaking to the students: mixed and grand. The mixed style includes complex and sometimes showy language to help students see the artistry of the subject they are studying, whereas the grand style is not considered quite as elegant. Augustine balanced his teaching philosophy with the traditional Bible-based practice of strict discipline.[citation needed] Augustine knew Latin and Ancient Greek. He had a long correspondence with St Jerome regarding the textual differences between the Hebrew Bible and the Greek Septuagint, concluding that the original Greek manuscripts closely resembled the Hebrew ones, and that even the differences between the two original versions of Holy Scripture could illuminate its spiritual meaning, since both had been inspired by God as a unity. Augustine of Hippo is one of the very few authors in Antiquity to examine theoretically the ideas of religious freedom and coercion.: 107 Augustine handled the infliction of punishment and the exercise of power over law-breakers by analyzing these issues in ways similar to modern debates on penal reform. His teaching on coercion has "embarrassed his modern defenders and vexed his modern detractors,": 116 because it is seen as making him appear "to generations of religious liberals as le prince et patriarche de persecuteurs" (the prince and patriarch of persecutors).: 107 Brown asserts that, at the same time, Augustine becomes "an eloquent advocate of the ideal of corrective punishment" and reformation of the wrongdoer. 
Russell says Augustine's theory of coercion "was not crafted from dogma, but in response to a unique historical situation" and is, therefore, context-dependent, while others see it as inconsistent with his other teachings.: 125 During the Great Persecution, "When Roman soldiers came calling, some of the [Catholic] officials handed over the sacred books, vessels, and other church goods rather than risk legal penalties" over a few objects.: ix Maureen Tilley says this was a problem by 305 that became a schism by 311, because many of the North African Christians had a long-established tradition of a "physicalist approach to religion.": xv The sacred scriptures were not simply books to them, but were the Word of God in physical form, therefore they saw handing over the Bible, and handing over a person to be martyred, as "two sides of the same coin.": ix Those who cooperated with the authorities became known as traditores. The term originally meant one who hands over a physical object, but it came to mean "traitor".: ix According to Tilley, after the persecution ended, those who had apostatized wanted to return to their positions in the church.: xiv The North African Christians, the rigorists who became known as Donatists, refused to accept them.: ix, x Catholics were more tolerant and wanted to wipe the slate clean.: xiv, 69 For the next 75 years, both parties existed, often directly alongside each other, with a double line of bishops for the same cities.: xv Competition for the loyalty of the people included multiple new churches and violence.[j]: 334 No one is exactly sure when the Circumcellions and the Donatists allied, but for decades, they fomented protests and street violence, accosted travellers and attacked random Catholics without warning, often doing serious and unprovoked bodily harm such as beating people with clubs, cutting off their hands and feet, and gouging out eyes.: 172, 173, 222, 242, 254 Augustine became coadjutor Bishop of Hippo in 395, and since he believed that conversion must be voluntary, his appeals to the Donatists were verbal. For several years, he used popular propaganda, debate, personal appeal, General Councils, appeals to the emperor and political pressure to bring the Donatists back into union with the Catholics, but all attempts failed.: 242, 254 The harsh realities Augustine faced can be found in his Letter 28 written to bishop Novatus around 416. Donatists had attacked Bishop Rogatus, who had recently converted to Catholicism, cutting out his tongue and cutting off his hands. An unnamed count of Africa had sent his agent with Rogatus, and he too had been attacked; the count was "inclined to pursue the matter.": 120 Russell says Augustine demonstrates a "hands-on" involvement with the details of his bishopric, but at one point in the letter, he confesses he does not know what to do. "All the issues that plague him are there: stubborn Donatists, Circumcellion violence, the vacillating role of secular officials, the imperative to persuade, and his own trepidations.": 120, 121 The empire responded to the civil unrest with the law and its enforcement, and thereafter, Augustine changed his mind about using verbal arguments alone. 
Instead, he came to support the state's use of coercion.: 107–116 Augustine did not believe the empire's enforcement would "make the Donatists more virtuous" but he did believe it would make them "less vicious.": 128 The primary 'proof-text' of what Augustine thought concerning coercion is from Letter 93, written in 408, as a reply to bishop Vincentius, of Cartenna (Mauretania, North Africa). This letter shows that both practical and biblical reasons led Augustine to defend the legitimacy of coercion. He confesses that he changed his mind because of "the ineffectiveness of dialogue and the proven efficacy of laws.": 3 He had been worried about false conversions if force was used, but "now," he says, "it seems imperial persecution is working." Many Donatists had converted.: 116 "Fear had made them reflect, and made them docile.": 3 Augustine continued to assert that coercion could not directly convert someone, but concluded it could make a person ready to be reasoned with.: 103–121 According to Mar Marcos, Augustine made use of several biblical examples to legitimize coercion, but the primary analogy in Letter 93 and in Letter 185 is the parable of the Great Feast in Luke 14.15–24 and its statement "compel them to come in".: 1 Russell says Augustine uses the Latin term cogo, instead of the compello of the Vulgate, since to Augustine, cogo meant to "gather together" or "collect" and was not simply "compel by physical force.": 121 In 1970, Robert Markus argued that, for Augustine, a degree of external pressure being brought for the purpose of reform was compatible with the exercise of free will. Russell asserts that Confessions 13 is crucial to understanding Augustine's thought on coercion; using Peter Brown's explanation of Augustine's view of salvation, he explains that Augustine's past, his own sufferings and "conversion through God's pressures," along with his biblical hermeneutics, is what led him to see the value in suffering for discerning truth.: 116–117 According to Russell, Augustine saw coercion as one among many conversion strategies for forming "a pathway to the inner person.": 119 In Augustine's view, there is such a thing as just and unjust persecution. Augustine explains that when the purpose of persecution is to lovingly correct and instruct, then it becomes discipline and is just.: 2 He said the church would discipline its people out of a loving desire to heal them, and that, "once compelled to come in, heretics would gradually give their voluntary assent to the truth of Christian orthodoxy.": 115 Frederick H. Russell describes this as "a pastoral strategy in which the church did the persecuting with the dutiful assistance of Roman authorities,": 115 adding that it is "a precariously balanced blend of external discipline and inward nurturance.": 125 Augustine placed limits on the use of coercion, recommending fines, imprisonment, banishment, and moderate floggings, preferring beatings with rods, a common practice in the ecclesial courts.: 164 He opposed severity, maiming, and the execution of heretics. While these limits were mostly ignored by Roman authorities, Michael Lamb says that in doing this, "Augustine appropriates republican principles from his Roman predecessors..." and maintains his commitment to liberty, legitimate authority, and the rule of law as a constraint on arbitrary power. He continues to advocate holding authority accountable to prevent domination but affirms the state's right to act. Herbert A. 
Deane, on the other hand, says there is a fundamental inconsistency between Augustine's political thought and "his final position of approval of the use of political and legal weapons to punish religious dissidence" and others have seconded this view.[k] Brown asserts that Augustine's thinking on coercion is more of an attitude than a doctrine since it is "not in a state of rest," but is instead marked by "a painful and protracted attempt to embrace and resolve tensions.": 107 According to Russell, it is possible to see how Augustine himself had evolved from his earlier Confessions to this teaching on coercion and the latter's strong patriarchal nature: "Intellectually, the burden has shifted imperceptibly from discovering the truth to disseminating the truth.": 129 The bishops had become the church's elite with their own rationale for acting as "stewards of the truth." Russell points out that Augustine's views are limited to time and place and his own community, but later, others took what he said and applied it outside those parameters in ways Augustine never imagined or intended.: 129 Works Augustine was one of the most prolific Latin authors in terms of surviving works, and the list of his works consists of more than one hundred separate titles. They include apologetic works against the heresies of the Arians, Donatists, Manichaeans and Pelagians; texts on Christian doctrine, notably De Doctrina Christiana (On Christian Doctrine); exegetical works such as commentaries on Genesis, the Psalms and Paul's Letter to the Romans; many sermons and letters; and the Retractationes, a review of his earlier works which he wrote near the end of his life. Apart from those, Augustine is probably best known for his Confessions, which is a personal account of his earlier life, and for De civitate Dei (The City of God, consisting of 22 books), which he wrote to restore the confidence of his fellow Christians, which was badly shaken by the sack of Rome by the Visigoths in 410. His On the Trinity, in which he developed what has become known as the 'psychological analogy' of the Trinity, is also considered to be among his masterpieces, and arguably of more doctrinal importance than the Confessions or the City of God. He also wrote On Free Choice of the Will (De libero arbitrio), addressing why God gives humans free will that can be used for evil. Legacy In both his philosophical and theological reasoning, Augustine was greatly influenced by Stoicism, Platonism and Neoplatonism, particularly by the work of Plotinus, author of the Enneads, probably through the mediation of Porphyry and Victorinus (as Pierre Hadot has argued). Some Neoplatonic concepts are still visible in Augustine's early writings. His early and influential writing on the human will, a central topic in ethics, would become a focus for later philosophers such as Schopenhauer, Kierkegaard, and Nietzsche. He was also influenced by the works of Virgil (known for his teaching on language), and Cicero (known for his teaching on argument). Augustine, along with Ambrose, Jerome and pope Gregory the Great, is considered one of the four Great Latin Church Fathers. Philosopher Bertrand Russell was impressed by Augustine's meditation on the nature of time in the Confessions, comparing it favourably to Kant's version of the view that time is subjective. 
Catholic theologians generally subscribe to Augustine's belief that God exists outside of time in the "eternal present"; that time only exists within the created universe because only in space is time discernible through motion and change. His meditations on the nature of time are closely linked to his consideration of the human ability of memory. Frances Yates, in her 1966 study The Art of Memory, argues that a brief passage of the Confessions, 10.8.12, in which Augustine writes of walking up a flight of stairs and entering the vast fields of memory, clearly indicates that the ancient Romans were aware of how to use explicit spatial and architectural metaphors as a mnemonic technique for organizing large amounts of information. Augustine's philosophical method, especially demonstrated in his Confessions, had a continuing influence on Continental philosophy throughout the 20th century. His descriptive approach to intentionality, memory, and language as these phenomena are experienced within consciousness and time anticipated and inspired the insights of modern phenomenology and hermeneutics. Edmund Husserl writes: "The analysis of time-consciousness is an age-old crux of descriptive psychology and theory of knowledge. The first thinker to be deeply sensitive to the immense difficulties to be found here was Augustine, who laboured almost to despair over this problem." Martin Heidegger refers to Augustine's descriptive philosophy at several junctures in his influential work Being and Time.[l] Hannah Arendt began her philosophical writing with a dissertation on Augustine's concept of love, Der Liebesbegriff bei Augustin (1929): "The young Arendt attempted to show that the philosophical basis for vita socialis in Augustine can be understood as residing in neighbourly love, grounded in his understanding of the common origin of humanity." Jean Bethke Elshtain in Augustine and the Limits of Politics tried to associate Augustine with Arendt in their concept of evil: "Augustine did not see evil as glamorously demonic but rather as absence of good, something which paradoxically is really nothing. Arendt ... envisioned even the extreme evil which produced the Holocaust as merely banal [in Eichmann in Jerusalem]." Augustine's philosophical legacy continues to influence contemporary critical theory through the contributions and inheritors of these 20th-century figures. Seen from a historical perspective, there are three main views on the political thought of Augustine: first, political Augustinianism; second, Augustinian political theology; and third, Augustinian political theory. The historian Diarmaid MacCulloch has written: "Augustine's impact on Western Christian thought can hardly be overstated; only his beloved example, Paul of Tarsus, has been more influential, and Westerners have generally seen Paul through Augustine's eyes." Thomas Aquinas was influenced heavily by Augustine. On the topic of original sin, Aquinas proposed a more optimistic view of man than that of Augustine in that his conception leaves to the reason, will, and passions of fallen man their natural powers even after the Fall, without "supernatural gifts". While in his pre-Pelagian writings Augustine taught that Adam's guilt, as transmitted to his descendants, much enfeebles, though does not destroy, the freedom of their will, Protestant reformers Martin Luther and John Calvin affirmed that Original Sin completely destroyed liberty (see total depravity). 
According to Leo Ruickbie, Augustine's arguments against magic, differentiating it from a miracle, were crucial in the early Church's fight against paganism and became a central thesis in the later denunciation of witches and witchcraft. According to Professor Deepak Lal, Augustine's vision of the heavenly city has influenced the secular projects and traditions of the Enlightenment, Marxism, Freudianism and eco-fundamentalism. Post-Marxist philosophers Antonio Negri and Michael Hardt rely heavily on Augustine's thought, particularly The City of God, in their book of political philosophy Empire. Augustine has influenced many modern-day theologians and authors such as John Piper. Hannah Arendt, an influential 20th-century political theorist, wrote her doctoral dissertation in philosophy on Augustine, and continued to rely on his thought throughout her career. Ludwig Wittgenstein quotes Augustine extensively in Philosophical Investigations for his approach to language, both admiringly and as a sparring partner to develop his own ideas, including the extended passage from the Confessions with which the book opens.[citation needed] Contemporary linguists have argued that Augustine significantly influenced the thought of Ferdinand de Saussure, who did not 'invent' the modern discipline of semiotics, but rather built upon Aristotelian and Neoplatonic knowledge from the Middle Ages, via an Augustinian connection: "as for the constitution of Saussurian semiotic theory, the importance of the Augustinian thought contribution (correlated to the Stoic one) has also been recognized. Saussure did not do anything but reform an ancient theory in Europe, according to the modern conceptual exigencies." Pope Benedict XVI, in his autobiographical book Milestones, notes that Augustine was one of the deepest influences on his thought.[page needed] Pope Francis suggests that Augustine's reflections on the relationship between the "beloved disciple" and Jesus (John 13:23: "reclining next to him") contributed to the development of devotion to the Sacred Heart, and Pope Leo XIV, a member of the Augustinian order, refers to his patron as "a vigilant pastor and theologian of rare insight". Marc-Antoine Charpentier composed two motets on Augustine: "Pour St Augustin mourant", H.419, for 2 voices and continuo (1687), and "Pour St Augustin", H.307, for 2 voices and continuo (1670s). Much of Augustine's conversion is dramatized in the oratorio La conversione di Sant'Agostino (1750) composed by Johann Adolph Hasse. The libretto for this oratorio, written by Duchess Maria Antonia of Bavaria, draws upon the influence of Metastasio (the finished libretto having been edited by him) and is based on an earlier five-act play Idea perfectae conversionis dive Augustinus written by the Jesuit priest Franz Neumayr. In the libretto, Augustine's mother Monica is presented as a prominent character who is worried that Augustine might not convert to Christianity. As Dr. Andrea Palent says: Maria Antonia Walpurgis revised the five-part Jesuit drama into a two-part oratorio libretto in which she limits the subject to the conversion of Augustine and his submission to the will of God. To this was added the figure of the mother, Monica, so as to let the transformation appear by experience rather than the dramatic artifice of deus ex machina. In his poem "Confessional", Frank Bidart compares the relationship between Augustine and his mother, Saint Monica, to the relationship between the poem's speaker and his mother. 
In the 2010 TV miniseries Restless Heart: The Confessions of Saint Augustine, Augustine is played by Matteo Urzia (aged 15), Alessandro Preziosi (aged 25) and Franco Nero (aged 76). In 1967, American singer-songwriter Bob Dylan released a song entitled "I Dreamed I Saw St. Augustine" as part of his album John Wesley Harding. The song has been covered by several artists including Joan Baez, Vic Chesnutt, Eric Clapton, John Doe, Thea Gilmore, Adam Selzer and Dirty Projectors. In 2016, the band The Chairman Dances released "Augustine", a response song to Bob Dylan's "I Dreamed I Saw St. Augustine". English pop/rock musician, singer and songwriter Sting wrote a song related to Saint Augustine entitled "Saint Augustine in Hell", which was part of his fourth solo studio album Ten Summoner's Tales, released in 1993. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTEMcFerran201524-132] | [TOKENS: 10728] |
PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been one of the primary manufacturers of the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's reversal, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of Japanese companies not turning against one another in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to use what it had developed with Nintendo and Sega to create a console of its own, based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 attended by Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the proposal gained Ohga's enthusiasm, a majority of those present remained opposed, among them older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high-quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under the Sony name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic/Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995); Ridge Racer was one of the most popular arcade games of the time, and despite Namco being a longstanding Nintendo developer, it had been confirmed behind closed doors by December 1993 that Ridge Racer would be the PlayStation's first game. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was acquired by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour its own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the compatibility of existing code should the hardware be revised in the future. Despite the inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, found allocating RAM a challenging aspect of development given the 3.5-megabyte restriction. 
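To make the constraint concrete: a console title of this era typically carved its fixed memory budget into static arenas up front rather than relying on a growing heap. The following C sketch is purely illustrative (the arena size and all names are hypothetical, and this is not code from Sony's libraries or SN Systems' toolchain); it shows the bump-allocation discipline that a hard budget of this kind encourages.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative only: a fixed 2 MB arena standing in for the
       PlayStation's main RAM budget; names and sizes are hypothetical. */
    #define MAIN_RAM_BUDGET (2u * 1024u * 1024u)

    static uint8_t arena[MAIN_RAM_BUDGET];
    static size_t arena_used;

    /* Bump-allocate from the fixed arena. Returns NULL when the budget
       is exhausted instead of growing, so overruns surface immediately. */
    static void *arena_alloc(size_t n)
    {
        n = (n + 7u) & ~(size_t)7u;   /* keep 8-byte alignment */
        if (arena_used + n > MAIN_RAM_BUDGET)
            return NULL;
        void *p = &arena[arena_used];
        arena_used += n;
        return p;
    }

In such a scheme a level loader would draw textures and geometry from arena_alloc and simply reset arena_used to zero when the level is unloaded, making every byte of the budget accountable at all times.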
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The technical specifications were finalised in 1993 and the design during 1994. The PlayStation name and final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. One retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, head of Sony's interactive entertainment division, summoned SCEA president Steve Race to the conference stage; Race said simply "$299" and left the stage to a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent high-street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games sold to consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to about $4.1 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One model) countrywide on 24 January 2002, at a price of Rs 7,990 and with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third company's registration of the trademark meant the console could not be released officially, and the officially distributed Sega Saturn initially dominated the market; as the Sega console withdrew, however, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console had been the Sega Saturn, but after the Saturn left the market the PlayStation's user base grew to around 300,000 by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which letters were replaced by the controller's button symbols, stylised as "LIVE IN Y△UR W□RLD. PL✕Y IN ○URS." (Live in Your World. Play in Ours.) and "U R NOT E" (with the E printed in red, reading as "You are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. 
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclub operators such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to overcome Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, rivalling most supercomputers of the time. 
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-math coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can draw up to 4,000 sprites per frame and render around 180,000 texture-mapped polygons per second, or 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a further reduction in ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service and came with the documentation and software necessary to program PlayStation games and applications, including a C compiler. 
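The GTE's appeal to programmers was that it performed this per-vertex maths in fast fixed-point hardware rather than floating point. As a rough illustration, the C sketch below mirrors the style of calculation involved, assuming the 4.12 fixed-point format (one sign bit, three integer bits, twelve fractional bits) commonly described for the GTE's rotation matrices; it is not the actual PlayStation SDK interface.

    #include <stdint.h>

    /* Illustrative 4.12 fixed-point rotation of a vertex, sketching the
       kind of maths the GTE accelerates; not Sony's real GTE interface. */
    typedef int16_t fx412;               /* 1 sign, 3 integer, 12 fraction bits */
    #define FX_ONE 4096                  /* 1.0 in 4.12 */

    typedef struct { fx412 m[3][3]; } Mat3;   /* rotation matrix */
    typedef struct { int32_t x, y, z; } Vec3; /* vertex position */

    /* Each product of a 4.12 matrix entry and a coordinate is shifted
       right by 12 bits to renormalise, keeping all work in integer
       registers; coordinates are assumed small enough that the products
       fit in 32 bits. */
    static Vec3 rotate(const Mat3 *r, Vec3 v)
    {
        Vec3 o;
        o.x = (r->m[0][0]*v.x + r->m[0][1]*v.y + r->m[0][2]*v.z) >> 12;
        o.y = (r->m[1][0]*v.x + r->m[1][1]*v.y + r->m[1][2]*v.z) >> 12;
        o.z = (r->m[2][0]*v.x + r->m[2][1]*v.y + r->m[2][2]*v.z) >> 12;
        return o;
    }

On the real hardware a single coprocessor instruction performed the rotate, translate and perspective steps in a fixed, small number of cycles, which is why per-vertex work could stay off the main CPU.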
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ○, ✕, □). Rather than depicting traditionally used letters or numbers on its buttons, the PlayStation controller established a visual trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also features a thumb-operated digital hat switch, corresponding to the traditional D-pad and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking in the analogue sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles and slightly different shoulder buttons, and rumble feedback is included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite its having received promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play audio CDs. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or without closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUIs of the PlayStation and PS One differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSes on a Sega console. Bleem! 
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R media and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed which, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and duplicated discs therefore omitted it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process. Early PlayStations, particularly early 1000-series models, experience skipping full-motion video or emit physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. 
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel, rivalling the offerings of Sega and Nintendo. 
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities, in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success resulted in a significant financial boon for Sony, with profits from their video game division coming to account for around 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal towards older audiences to be a crucial factor in propelling the video game industry, as well as its assistance in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to bring a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-reliant Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridge games while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
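To see how a roughly 40% lower shelf price could coexist with unchanged publisher revenue, consider a deliberately round, hypothetical example (these figures are illustrative, not documented costs): a cartridge game retailing at $75 with about $30 of ROM manufacturing cost leaves roughly $45 per unit, while a CD pressed for about $2 could retail at $45, exactly 40% less, and still leave roughly $43 per unit. In other words, the difference in media cost, not the seller's margin, absorbs almost the entire price cut.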
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-kahng-91] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record-keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
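The principle is that multiplication becomes addition of logarithms: since log(ab) = log(a) + log(b), sliding one logarithmic scale along another adds the corresponding physical lengths. As a worked example, log10(2) ≈ 0.301 and log10(3) ≈ 0.477; their sum is 0.778, which is log10(6), so setting the start of the sliding scale against 2 on the fixed scale and reading the fixed scale opposite 3 gives the product 6. Division reverses the process by subtracting lengths.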
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials; his designs were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. The project was eventually dissolved when the British Government decided to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing; his insight that Boolean algebra could be applied to the analysis and synthesis of switching circuits is the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
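Shannon's correspondence between switching circuits and Boolean algebra can be sketched directly in code. The model below is a minimal illustration (the function names are invented for this example): each switch is a truth value, contacts wired in series behave as AND, contacts wired in parallel behave as OR, and a normally-closed contact behaves as NOT.

    def series(a: bool, b: bool) -> bool:
        # Two switches in series conduct only when both are closed: AND.
        return a and b

    def parallel(a: bool, b: bool) -> bool:
        # Two switches in parallel conduct when either is closed: OR.
        return a or b

    def normally_closed(a: bool) -> bool:
        # A normally-closed contact conducts when its coil is NOT energized.
        return not a

    def exclusive_or(a: bool, b: bool) -> bool:
        # A composite "circuit" that conducts when exactly one input is on.
        return parallel(series(a, normally_closed(b)),
                        series(normally_closed(a), b))

    for a in (False, True):
        for b in (False, True):
            print(a, b, exclusive_or(a, b))

Analyzing or simplifying a relay network then reduces to manipulating the corresponding Boolean expression, which is the reduction Shannon's thesis made systematic.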
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Because it used a binary system rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), Zuse's machine was easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after an initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, founded in Berlin in 1941 as the first company whose sole purpose was the development of computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of Boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster and more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring it. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated circuit on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on the work of Carl Frosch and Lincoln Derick on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to a lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, RAM, or a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and data is provided to it. Examples include keyboards, mice and joysticks. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
Examples include monitors and printers. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): it reads the next instruction from the cell indicated by the program counter, decodes the instruction into commands for the other systems, fetches whatever data the instruction requires, has the ALU or other hardware perform the operation, writes the result back to a memory cell or a register, and advances the program counter to point at the next instruction. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is yet another smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine and cosine, and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation – although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
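The interplay of numbered cells, the program counter, and jumps can be illustrated with a toy simulation. The instruction names and encoding below are invented for this sketch and do not correspond to any real instruction set; the point is only that a "jump" is an ordinary write to the counter that selects the next instruction.

    memory = [0] * 4096               # numbered cells, each holding one number

    program = [
        ("SET", (10, 2)),             # put the number 2 into cell 10
        ("SET", (11, 40)),            # put the number 40 into cell 11
        ("ADD", (12, 10, 11)),        # cell 12 = cell 10 + cell 11
        ("JUMP", 5),                  # write 5 into the program counter...
        ("SET", (12, 0)),             # ...so this instruction is skipped
        ("HALT", None),
    ]

    pc = 0                            # the program counter is itself just a number
    while True:
        op, arg = program[pc]
        pc += 1                       # by default, step to the next instruction
        if op == "SET":
            cell, value = arg
            memory[cell] = value
        elif op == "ADD":
            dest, a, b = arg
            memory[dest] = memory[a] + memory[b]
        elif op == "JUMP":
            pc = arg                  # control flow is arithmetic on the counter
        elif op == "HALT":
            break

    print(memory[12])                 # prints 42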
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. However, it is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
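The byte and two's-complement conventions described above can be made concrete with a short sketch. An 8-bit width is assumed here, and the helper names are invented for the example; real hardware performs this mapping implicitly in its arithmetic circuits.

    def encode(value: int, bits: int = 8) -> int:
        # Store a signed integer as an unsigned two's-complement bit pattern.
        if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
            raise ValueError("value does not fit in the given width")
        return value & ((1 << bits) - 1)

    def decode(pattern: int, bits: int = 8) -> int:
        # Interpret an unsigned bit pattern as a signed integer.
        if pattern >= (1 << (bits - 1)):  # a set high bit marks a negative number
            return pattern - (1 << bits)
        return pattern

    print(encode(-1))             # prints 255 (binary 11111111)
    print(decode(255))            # prints -1
    print(decode(encode(-128)))   # round-trips anywhere in the -128..+127 range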
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. It might seem that multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
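The "embarrassingly parallel" case mentioned above is straightforward to sketch: when tasks are fully independent, the work can be split into chunks, handed to one worker per CPU, and the partial results combined at the end. The workload and chunk size below are arbitrary choices for the example.

    from multiprocessing import Pool

    def count_primes(bounds):
        # Count primes in [lo, hi) by trial division - deliberately CPU-bound.
        lo, hi = bounds
        count = 0
        for n in range(max(lo, 2), hi):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    if __name__ == "__main__":
        chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
        with Pool() as pool:          # one worker process per available CPU by default
            total = sum(pool.map(count_primes, chunks))
        print(total)                  # prints 9592, the number of primes below 100,000

Because no chunk depends on any other, the speed-up scales with the number of CPUs, minus the fixed cost of starting the workers and gathering the results.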
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
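The article illustrates this point with a short program in the MIPS assembly language; the listing itself did not survive extraction here, so the following is a reconstruction of such a summing loop, with register numbers and labels chosen for this sketch rather than taken from the original.

    addi $8, $0, 0         # sum := 0 (register $8 holds the running total)
    addi $9, $0, 1         # i := 1 (register $9 holds the next number to add)
    loop:
    slti $10, $9, 1001     # $10 := 1 while i <= 1000, otherwise 0
    beq $10, $0, finish    # once i exceeds 1000, leave the loop
    add $8, $8, $9         # sum := sum + i
    addi $9, $9, 1         # i := i + 1
    j loop                 # jump back and add the next number
    finish:
    add $2, $8, $0         # copy the result (500500) into register $2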
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages – some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. The design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, use of the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices, and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction that can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. See also Notes References Sources External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/G%C3%A9rard_de_Vaucouleurs] | [TOKENS: 613] |
Contents Gérard de Vaucouleurs Gérard Henri de Vaucouleurs (French pronunciation: [ʒeʁaʁ ɑ̃ʁi də vokulœʁ]; 25 April 1918 – 7 October 1995) was a French astronomer best known for his studies of galaxies. Life and career Gérard de Vaucouleurs was born on April 25, 1918, in Paris; he took his mother's maiden name as his last name. He had an early interest in amateur astronomy and received his undergraduate degree in 1939 at the Sorbonne in that city. After military service in World War II, he resumed his pursuit of astronomy. He married fellow astronomer Antoinette de Vaucouleurs on October 31, 1944, and the couple would frequently collaborate on astronomical research. He was fluent in English and spent 1949–51 in England and 1951–57 in Australia at Mount Stromlo Observatory. He was at Lowell Observatory in Arizona from 1957 to 1958 and at Harvard from 1958 to 1960. In 1960 he was appointed to the University of Texas at Austin, where he spent the rest of his career. He was one of the first five faculty in the newly formed astronomy department there. His wife Antoinette died in 1987. In 1995 he died of a heart attack at his home in Austin at the age of 77; he was survived by his second wife, Elysabeth. Research His earliest work concerned the planet Mars; while at Harvard, he used telescope observations from 1909 to 1958 to study the areographic coordinates of features on the Martian surface. His later work focused on the study of galaxies, and he co-authored the Third Reference Catalogue of Bright Galaxies with his wife Antoinette (1921–1987), a fellow UT Austin astronomer and lifelong collaborator. His specialty was reanalyzing Hubble and Sandage's galaxy atlas and recomputing the distance measurements by averaging many different kinds of metrics, such as luminosity, the diameters of ring galaxies, and brightest star clusters, an approach he called "spreading the risks." During the 1950s he promoted the idea that galactic clusters are grouped into superclusters. The de Vaucouleurs modified Hubble sequence is a widely used variant of the standard Hubble sequence. De Vaucouleurs was awarded the Henry Norris Russell Lectureship by the American Astronomical Society in 1988. He was awarded the Prix Jules Janssen of the Société astronomique de France (Astronomical Society of France) in the same year. He and Antoinette, his longtime collaborator, together produced 400 research and technical papers, 20 books and 100 articles for laymen. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Revue] | [TOKENS: 2089] |
Contents Revue A revue is a type of multi-act popular theatrical entertainment that combines music, dance, and sketches. The revue has its roots in 19th-century popular entertainment and melodrama but grew into a substantial cultural presence of its own during its golden years from 1916 to 1932. Though most famous for their visual spectacle, revues frequently satirized contemporary figures, news or literature. Similar to the related subforms of operetta and musical theatre, the revue art form brings together music, dance and sketches to create a compelling show. In contrast to these, however, revue does not have an overarching storyline. Rather, a general theme serves as the motto for a loosely related series of acts that alternate between solo performances and dance ensembles. Owing to high ticket prices, ribald publicity campaigns, and the occasional use of prurient material, the revue was typically patronized by audience members who earned more and felt even less restricted by middle-class social norms than their contemporaries in vaudeville. Like much of that era's popular entertainments, revues often featured material based on sophisticated, irreverent dissections of topical matter, public personae and fads, though the primary attraction was found in the frank display of the female body. Etymology Revue comes from the French word for "review," as in a "show presenting a review of current events." George Lederer's The Passing Show (1894) is usually held to be the first successful American "review." The English spelling was used until 1907 when Florenz Ziegfeld Jr. popularized the French spelling. "Follies" is now sometimes (incorrectly) employed as an analog for "revue," though the term was proprietary to Ziegfeld until his death in 1932. Other popular proprietary revue names included George White's "Scandals," Earl Carroll's "Vanities" and John Murray Anderson's Greenwich Village Follies. Origin Revues are most properly understood as having amalgamated several theatrical traditions within the corpus of a single entertainment. Minstrelsy's olio section provided a structural map of popular variety presentation, while literary travesties highlighted an audience hunger for satire. Theatrical extravaganzas, in particular, moving panoramas, demonstrated a vocabulary of the spectacular. Burlesque, itself a bawdy hybrid of various theatrical forms, lent to classic revue an open interest in female sexuality and the masculine gaze. Golden age Revues enjoyed great success on Broadway from the World War I years until the Great Depression, when the stock market crash forced many revues from cavernous Broadway houses into smaller venues. (The shows did, however, continue to appear infrequently in large theatres well into the 1950s.) The high ticket prices of many revues helped ensure audiences distinct from other live popular entertainments during their height of popularity (late 1910s–1940s). In 1914, the Follies charged $5.00 for an opening night ticket ($130 in 2020 dollars); at that time, many cinema houses charged from $0.10 to $0.25, while low-priced vaudeville seats were $0.15. Among the many popular producers of revues, Florenz Ziegfeld played the greatest role in developing the classical revue through his glorification of a new theatrical "type", "the American girl". Famed for his often bizarre publicity schemes and continual debt, Ziegfeld joined Earl Carroll, George White, John Murray Anderson, and the Shubert Brothers as the leading producing figures of the American revue's golden age.
Revues also had a presence in Germany during the 1930s and 1940s, with films such as Frau meiner Träume ("The Woman of My Dreams") being made. Revues took advantage of their high revenue stream to lure away performers from other media, often offering exorbitant weekly salaries without the unremitting travel demanded by other entertainments. Performers such as Eddie Cantor, Anna Held, W. C. Fields, Bert Williams, Ed Wynn, the Marx Brothers and the Fairbanks Twins found great success on the revue stage. One of Cole Porter's early shows was Raymond Hitchcock's revue Hitchy-Koo of 1919. Composers or lyricists such as Richard Rodgers, Lorenz Hart, Irving Berlin, and George M. Cohan also enjoyed a tremendous reception on the part of audiences. Sometimes, an appearance in a revue provided a key early entry into entertainment. Largely due to their centralization in New York City and their adroit use of publicity, revues proved particularly adept at introducing new talents to the American theatre. Rodgers and Hart, one of the great composer/lyricist teams of the American musical theatre, followed up their early Columbia University student revues with the successful Garrick Gaieties (1925). Comedian Fanny Brice, following a brief period in burlesque and amateur variety, bowed to revue audiences in Ziegfeld's Follies of 1910. Specialist writers and composers of revues have included Sandy Wilson, Noël Coward, John Stromberg, George Gershwin, Earl Carroll, and the British team Flanders and Swann. In Britain predominantly, Tom Arnold also specialized in promoting series of revues, and his acts extended to the European continent and South Africa. Film revues With the introduction of talking pictures in 1927, studios immediately began filming acts from the stage. Such film shorts gradually replaced the live entertainment that had often accompanied cinema exhibition. By 1928, studios began planning to film feature-length versions of popular musicals and revues from the stage. The lavish films, noted by many for a sustained opulence unrivaled in Hollywood until the 1950s epics, reached a breadth of audience never found by the stage revue, all while significantly underpricing the now-faltering theatrical shows. A number of revues were released by the studios, many of which were filmed entirely (or partly) in color. The most notable examples of these are The Show of Shows (Warner Brothers, 1929), The Hollywood Revue of 1929 (Metro-Goldwyn-Mayer, 1929), Fox Movietone Follies of 1929 (Fox Film Corporation, 1929), Paramount on Parade (Paramount, 1930), New Movietone Follies of 1930 (Fox, 1930), and King of Jazz (Universal, 1930). Britain, too, produced expensive revues such as Harmony Heaven (British International Pictures, 1929) and Elstree Calling (BIP, 1930). Contemporary revues Revues are common today as student entertainment (with strong traditions in many universities in the UK, Canada, Australia, New Zealand, Norway, Sweden, Finland and Denmark). These use pastiche, in which contemporary songs are re-written in order to comment humorously on the college or its courses. While most comic songs will only be heard within the revue they were written for, sometimes they become more widely known – such as "A Transport of Delight", about the big red London bus, by Flanders and Swann, who first made their name in a revue titled At the Drop of a Hat. The Rolling Thunder Revue was a famed U.S.
concert tour consisting of a traveling caravan of musicians, headed by Bob Dylan, that took place in late 1975 and early 1976. Towards the end of the 20th century, a subgenre of revue largely dispensed with the sketches, founding narrative structure within a song cycle in which the material is culled from varied works. This type of revue may or may not have identifiable characters and a rudimentary storyline but, even when it does, the songs remain the focus of the show (for example, Closer Than Ever by Richard Maltby Jr. and David Shire). This type of revue usually showcases songs written by a particular composer or songs made famous by a particular performer. Examples of the former are Side By Side By Sondheim (music/lyrics Stephen Sondheim), Eubie! (Eubie Blake), Tom Foolery (Tom Lehrer), and Five Guys Named Moe (songs made popular by Louis Jordan). The eponymous nature of these later revues suggests a continued embrace of a unifying authorial presence in this seemingly scattershot genre, much as was earlier the case with Ziegfeld, Carroll, et al. With different artistic emphases, the revue genre is today above all upheld at traditional variety theatres such as Le Lido, the Moulin Rouge, and the Friedrichstadt-Palast in Berlin, as well as shows in Las Vegas. University and Medics' Revues It is a current and longstanding tradition of medical, dental, engineering, legal and veterinary schools within the UK, Canada, New Zealand and Australia to stage revues each year, combining comedy sketches, songs, parodies, films and sound-bites. As well as performing at their home universities, British revues are sometimes also performed at festivals such as the Edinburgh Festival Fringe. The Moira Stuart Cup is competed for annually at the United Hospitals Comedy Revue by all five of the University of London medical schools. It has been won by all medical schools at least once, with RUMS (UCL Medical School) and St George's Hospital Medical School achieving the most victories, winning the trophy six times each. The cup is not officially endorsed by Moira Stuart herself. a. In 2019, the judges ironically declared Imperial College School of Medicine the winners, because they could not decide which of The MDs Comedy Revue or The Zebraphiles was the funnier. b. The 2002 UH Revue was a showcase of each Medical School's Revue societies, with the competition element brought in from 2003. See also References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/NXT-G] | [TOKENS: 723] |
Contents Lego Mindstorms NXT Lego Mindstorms NXT is a programmable robotics kit released by Lego on August 2, 2006.[non-primary source needed] It replaced the Robotics Invention System, the first-generation Lego Mindstorms kit. The base kit ships in two versions: the retail version and the education base set. It comes with the NXT-G programming software or the optional LabVIEW for Lego Mindstorms. A variety of unofficial languages exist, such as NXC, NBC, leJOS NXJ, and RobotC. A second-generation set, Lego Mindstorms NXT 2.0, was released on August 1, 2009, with a color sensor and other upgrades. The third-generation EV3 was released in September 2013. NXT Intelligent Brick The kit's main component is the NXT Intelligent Brick, a computer that can accept input from up to four sensors and control up to three motors using modified RJ12 cables (similar to, but incompatible with, RJ11 phone cables); the plastic pin that holds the cable in the socket is moved slightly to the right. The brick has a 100×64 pixel monochrome LCD and four buttons that can navigate a user interface with hierarchical menus. It has a 32-bit ARM7TDMI-core Atmel AT91SAM7S256 microcontroller with 256 KB of flash memory and 64 KB of RAM, an 8-bit Atmel AVR ATmega48 microcontroller, and Bluetooth support. The brick also has a speaker and can play sound files at sampling rates up to 8 kHz. Power is supplied by 6 AA batteries (1.5 V each) in the consumer version of the kit, and by a rechargeable Li-ion battery in the educational version. The brick is compatible with the sensors and motors of its successor, Lego Mindstorms EV3. Lego has released open-source firmware for the NXT Intelligent Brick and schematics for all hardware components, and several developer kits with documentation are available for the NXT. Programming Simple programs can be created using the menu on the NXT Intelligent Brick itself. More complicated programs and sound files can be downloaded to the brick via USB or Bluetooth. Files can be copied wirelessly between two NXT bricks, and some mobile phones can be used as a remote control. Up to three NXT bricks can communicate simultaneously via Bluetooth when user-created programs are run. The kit's retail version includes software for writing programs that runs on Windows and Mac OS personal computers. The software, based on National Instruments LabVIEW, provides a visual programming language for writing simple programs and downloading them to the NXT brick; instead of requiring users to write lines of code, it lets them design programs with flowchart-like blocks. Sensors and actuators The Lego Mindstorms NXT base kit includes three servo motors and four sensors: touch, sound, light, and ultrasonic. Other parts may be bought separately, and third-party companies manufacture sensors beyond those sold by Lego, such as compass, gyroscope, infrared tracker, RFID reader, and accelerometer sensors. The temperature sensor can measure in Celsius or Fahrenheit. Sensors are connected to the NXT brick with a six-position modular connector that carries both analog and digital interfaces. The analog interface is backward-compatible (using an adapter) with the older Robotics Invention System. The digital interface is capable of I2C and RS-485 communication. NXT 2.0 Lego Mindstorms NXT 2.0 is the second set in the Lego Mindstorms series, introduced on August 5, 2009, at the Lego Shop in the U.S. The set contains 619 pieces, including a sensor that can detect colors. It was followed by the Lego Mindstorms EV3. See also References External links |
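The workflow described above (write a program on a PC, then send it to the brick over USB or Bluetooth) can also be driven directly from a host computer. The sketch below is a minimal illustration using the community-maintained nxt-python library rather than the official NXT-G software; the module and function names follow that library's 2.x releases and should be treated as assumptions, not as Lego's official API.

```python
# Minimal sketch: control an NXT brick from a PC with the community
# nxt-python library (2.x-style API; names are assumptions, not an
# official Lego interface).
import nxt.locator
from nxt.motor import Motor, PORT_B
from nxt.sensor import Touch, PORT_1

# Find a brick over USB or Bluetooth and open a connection.
brick = nxt.locator.find_one_brick()

# Address the motor on output port B and the touch sensor on input
# port 1, mirroring what a simple NXT-G flowchart would express.
motor = Motor(brick, PORT_B)
touch = Touch(brick, PORT_1)

motor.run(power=75)            # start the motor at 75% power
while not touch.get_sample():  # poll the touch sensor (True when pressed)
    pass
motor.brake()                  # stop the motor once the button is pressed
```

The pattern here (open a connection to the brick, then address motors by output port and sensors by input port) is the same model the graphical NXT-G blocks present visually.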
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Ecuador] | [TOKENS: 2491] |
Contents History of the Jews in Ecuador The history of the Jews in Ecuador dates back to the 16th and 17th centuries, when Sephardic Jews began arriving from Spain and Portugal as a result of the Spanish Inquisition. Ecuadorian Jews are members of a small Jewish community in the territory of today's Ecuador, and they form one of the smallest Jewish communities in South America. History The first Jews began to arrive in Ecuador in the 16th and 17th centuries. From 1580 to 1640, Spain and Portugal were united in the Iberian Union, after King Philip II of Spain inherited the Portuguese throne. During this time, many Portuguese were "suspect in their faith", so Jews began to enter the Viceroyalty of Peru, a newly founded colony where Inquisition surveillance was weaker. As a result of the Iberian Union, much of Spanish America was ruled by one crown during this period of sixty years. During this time, Portuguese migrants spread through the dominions of Spanish America, and the term "Portuguese" became synonymous with "converted Jew". In 1640, the union ended when the Portuguese revolted against the Spanish monarchy and the Duke of Braganza took the throne of the kingdom of Portugal under the name of John IV. The "new Christians" in Spanish America found no tolerance from the inquisitorial regime, and were forced to migrate to other regions of the Peruvian viceroyalty, especially to those where the Inquisition did not have any courts. The Viceroyalty of Peru was extremely large, and the territory still contained large areas with little to no Inquisition presence. By avoiding major urban centers, Jewish people, labeled as heretics, could survive by camouflaging their personal and group identity. Thus began a pattern of "new Christians" settling in the Viceroyalty of Peru and migrating from its center to the less-populated and less-controlled outer regions. A relatively large number of migrants made their way south towards Chile and north towards the Audiencia of Quito. Within the Audiencia of Quito, the new diaspora first headed to the interior district governed by Juan de Salinas y Loyola (later transformed into the township of Loja), which, according to studies by Ricardo Ordoñez Chiriboga, was an important destination for many migrating Sephardim. Subsequently, many of these families migrated further north to Cuenca, and then to the northernmost townships of Chimborazo (Alausí, Pallatanga and Chimborazo), continuing their flight from the powerful and cruel inquisitorial arm. Early Sephardic Jews likely arrived in Cuenca and its nearby settlements in the late sixteenth and early seventeenth centuries, but there is evidence of additional waves of Jewish migration to the area in later times. It is possible that other Sephardim had been established in the colonial territory since the early days of the Spanish conquest, as suggested by the presence of names associated with conquerors who arrived alongside Sebastián de Benalcázar and Pedro de Alvarado. In the seventeenth century, landowners began to appear in Cuenca with surnames of uncertain origin, including Saavedra, Haddad and Iglesias. Migrants also reached the northern Peruvian Andes, as the cultural and ethnic influences of the region were not yet defined by colonial boundaries to the extent they are today. Rather, this cultural-historical unit traces back to pre-Columbian times. 
These circumstances largely explain the Sephardic presence in gold-mining and commercial areas such as Loja, Zaruma, Cuenca, Santa Isabel, Yungilla, Tarquí, Chordeleg and Sígsig, as well as in Quito and Calacalí, in mountain-pass and trade-route towns between Guayaquil and Quito (Alausí, Chapacoto, San José de Chimborazo, San Miguel de Chimborazo, Guaranda), and, due to their proximity, in other areas of the northern highlands of Peru. The presence of Western Sephardic Jews in Ecuador remained hidden for years, as they often settled in very remote villages and practiced Judaism in secret at home. Many of these Crypto-Jews still speak Ladino. Some say that Antonio José de Sucre, a leader in the struggle for independence in South America and a hero of Ecuador, who served both as president of Peru and as president of Bolivia, was a descendant of these Jews. Certain family names among established Ecuadorian families attest to their (in some cases Crypto-Jewish) Sephardi ancestry; however, prior to World War II there was very little active Jewish immigration to Ecuador. Sephardic names in Ecuador include: Navon (wise), Moreno (teacher), Gabay (official), Piedra (stone), Franco (free), Amzalag (jeweler), Saban (soap), Espinoza (thorn), Nagar (carpenter), Haddad (blacksmith), and Hakim (medic). In 1904, there were only four recognized Jewish families in Ecuador, and a 1917 survey indicated the presence of 14 Jews in the country. After the United States established its immigration quota system with the Immigration Act of 1924, a handful more Jews arrived in Ecuador. However, mass Jewish immigration to Ecuador only began in the wake of the rise of Nazism and the ensuing Holocaust in Europe. During the years 1933–1943, about 2,700 Jews arrived, and by 1945 there were 3,000 new Jewish immigrants, 85% of whom were refugees from Europe. In the early years of World War II, Ecuador still admitted a certain number of immigrants; in 1939, when several South American countries refused to accept the 165 Jewish refugees from Germany aboard the ship Koenigstein, Ecuador granted them entry permits. Nevertheless, the country eventually adopted a policy of selectivity. According to this policy, Jewish immigrants to Ecuador were supposed to be employed in agriculture, but the authorities soon discovered that the immigrants were actually merchants, industrialists, and businessmen. As a result, legislation was passed in 1938 which compelled any Jew not engaged in agriculture or industry to leave the country. In addition, entry rights were limited to Jews who possessed a minimum of $400, which they were then required to invest in an industrial project. In 1935, the Comite pour l'Etude de l'Industrie de l'Immigration dans la Republique de l'Equateur (English: Committee for the Study of the Immigration Industry in the Republic of Ecuador) was established in Paris by the Freeland League for Jewish Colonization, with the purpose of creating a Jewish homeland in Ecuador, Australia or Surinam. An agreement was reached with the Ecuadorian government to transfer 500,000 acres of land to the committee's jurisdiction for a period of 30 years, to be settled by immigrants regardless of race, religion, or nationality. Several concessions were also promised, such as tax exemption for three years, citizenship after one year, customs exemption, and free transportation by train from the port to the interior of the country. 
The president signed the agreement several months later on the condition that a detailed program be presented by May 1937, and that the Committee invest $8,000 and settle at least 100 families. Some Jewish organizations, however, found the land proposed for the plan unacceptable, claiming that it was too far from population centers and that the climate was too severe. These objections resulted in a total abandonment of the project. Following this attempt, the American Jewish Joint Distribution Committee and HICEM (a merger of the Hebrew Immigrant Aid Society, the Jewish Colonization Association, and EmigDirect, which handled transportation through European ports; the latter German-based organization withdrew in 1934) attempted to establish chicken farms for the immigrants in other areas of Ecuador, and 60 families were settled, but conditions[clarification needed] precluded any success in the venture, which ultimately failed. Most of the immigrants were businessmen and professionals who preferred to carry on their own professions. Many Jewish craftsmen discovered that the native balsa wood was excellent for furniture making and began production. Later, these immigrants introduced iron and steel furniture, previously unknown in the country, to the Ecuadorian market. They also developed retail stores and opened hotels. The success of many of these immigrants, however, caused tension with the Syrian and Cuban communities, who had previously controlled those industries. This pressure led to some anti-Jewish sentiment, but nothing more substantial. In 1940, there were 3,000 Jews recorded in Ecuador, of whom a large majority were refugees from Germany. The majority of Jews in Ecuador worked in the press, commerce, and medical industries. They also established textile, pharmaceutical, and furniture factories. At its peak, in 1950, the Jewish population of Ecuador was estimated at 4,000, with the majority living in Quito. Several hundred also lived in Guayaquil, with several scores in Ambato, Riobamba, and Cuenca. In 1952, a law was passed requiring every foreigner to supply proof that they were engaged in the occupation stipulated in their entry visa. In response, the World Jewish Congress (WJC) tried to help Jews who were engaged in business but whose visas permitted only agricultural work. However, attempts at agricultural settlement were unsuccessful. Ecuador's government policies regarding Jewish immigration were historically tentative and volatile; for example, in 1935 it gave the Jews permission to settle within an area of about 20,000 square kilometres (7,700 sq mi), but in 1938 it issued an order that all Jewish residents working in areas other than agriculture, or incapable of developing industry, would be required to leave the country. Today, only 290 Jews live in Ecuador. The country's Jewish community is predominantly of German origin, but the younger generation is largely Spanish-speaking. The Ecuadorian Jewish community is a homogeneous group, which has facilitated strong communal organization. For example, the Asociación de Beneficencia Israelita, founded in 1938, is the central body for Jewish religious and cultural affairs in Ecuador. Other Jewish organizations in the country include the Zionist Federation, B'nai B'rith, the Women's International Zionist Organization (WIZO), and Maccabi. The community also publishes a bilingual Spanish–German bulletin called Informaciones. 
In Ecuador, intermarriage is not as large a problem as elsewhere, since Jews form a separate middle stratum between the upper (traditionally Catholic) classes and the lower classes of the indigenous population. There is a Jewish school in Quito, the Colegio Experimental Alberto Einstein, established in 1973, which serves both Jewish and non-Jewish students from kindergarten through the twelfth grade. The school celebrates all Jewish holidays, and it teaches Hebrew and other Jewish studies. The school has an excellent reputation and offers a pre-college preparatory program. The Jewish community of Quito also has its own community building, a home for the elderly, and a synagogue that holds services on the Sabbath and holidays. Ecuador has traditionally maintained friendly relations with Israel, and has frequently supported Israel in the United Nations; the Ecuadorian embassy in Israel is in Tel Aviv. In the late 1960s, the two countries developed a network of technical cooperation and assistance, particularly in the fields of agriculture and water development. Since 1948, 137 Ecuadorian Jews have emigrated to Israel. Prominent Ecuadorian Jews Ecuadorian Jews have achieved prominence in various fields, including academics, industry, and science. Benno Weiser (a.k.a. Benjamin Varon), who was an active Ecuadorian journalist, later entered the Israeli diplomatic service, serving in various Latin American countries. His brother, Max Weiser, was the first Israeli consul in Ecuador. Moselio Schaechter is a researcher who studies bacterial growth and cell division. References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mapillary] | [TOKENS: 1108] |
Contents Mapillary Mapillary is a service for the open sharing of crowdsourced geotagged photos, developed by Mapillary AB, a remote company based in Malmö, Sweden. Mapillary was launched in 2013 and acquired by Meta Platforms, Inc. in 2020. It offers street-level imagery similar to Google Street View. History Mapillary's co-founders were Jan Erik Solem, Johan Gyllenspetz, Peter Neubauer and Yubin Kuang.[non-primary source needed] According to Solem, Mapillary was founded to allow crowdsourcing of street-level imagery for use with computer vision. The project started in September 2013, with an iPhone app released in November 2013, followed by an Android app released in January 2014. Mapillary received $1.5 million in seed capital funding from a group of investors led by Sequoia Capital in January 2015. In March 2016, it raised $8M in additional funding (from Atomico, Sequoia, LDV Capital, and PlayFair) for expanded operations, including more computer vision talent and a San Francisco office. In spring 2018, the company received a $15M investment led by BMW i Ventures, for total estimated funding of $25M. In September 2018, Mapillary announced a "collaboration" with Amazon to use the Rekognition visual data analysis platform to extract text from Mapillary's huge database of 350 million images. As large cities struggle to manage current street sign inventories, the first major project is identifying parking signs and extracting sign text for one large U.S. city, which will use the data to build a parking app to help save drivers time when searching for parking. In October 2018, the company made CNBC's annual list of top 100 start-ups to watch. In November 2018, Mapillary released a software development kit (SDK) allowing interested third-party software developers to integrate Mapillary image-capture functionality in their apps, opening the way for additional input channels. In June 2020, Facebook acquired Mapillary for an undisclosed amount. After the acquisition, commercial use of Mapillary was made available free of charge. Because Mapillary is widely used for contributing to OpenStreetMap, and amid fears that Facebook might shut the service down, a tool was subsequently created for synchronizing data between Mapillary and KartaView (formerly OpenStreetCam). By November 14, 2020, over 55 TiB of data, or 30 million images, had been transferred to KartaView. In August 2020, Mapillary announced that more cameras would be made available for contributors, making possible streetside coverage of places that might otherwise never be photographed. Features Mapillary offers different capturing modes, including walking, riding (either a bike or car), or panorama. On 10 September 2014, Mapillary announced support for panoramas and spherical photos. As of May 2014, Mapillary had around 0.5 million photos; by December 2014, it had over 5.5 million. As of March 2015 it had 10 million photos, and by June 11, 2015, Mapillary had over 20 million photos. As of November 15, 2016, Mapillary had over 100 million photos, and in August 2023 it reached 2 billion. Major dataset contributions In 2018, Mapillary acquired major image datasets from two U.S. state transportation departments: approximately 5 million images each contributed by the Vermont Department of Transportation and the Arizona Department of Transportation. License The images on Mapillary can be used under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). 
There is special permission to derive data from the photos for contributing to OpenStreetMap and Wikimedia Commons. The GPX tracks can be used without restriction, and derived data can be used provided it is released under the ODbL. The license was changed on 29 April 2014 from CC BY-NC to CC BY-SA. The mobile apps (Android and iPhone) are proprietary software. Research/datasets In May 2017, Mapillary released an open subset of its very large and ever-expanding crowdsourced image dataset, the Mapillary Vistas Dataset of 25,000 street-level images with pixel-wise annotation, to help train autonomous vehicle AI system algorithms. With data from 190 countries, they described it as "the world's largest, most diverse dataset for object recognition on street-level imagery" and offered it free to both academic and commercial researchers, though licensing is required for commercial product integration. Mapillary Tasker On November 28, 2017, Mapillary released a beta tool known as the Mapillary Tasker, which "enables a task creator to tell other contributors where help is needed and what needs to be done." Contributors, in turn, can sort through the various tasks listed in the beta and work on whatever projects are interesting and feasible. The tasks can range from "completing coverage, making map edits based on images, and verifying object detection," and are separated into capture, map edit, and verification tasks in the tool itself. Because the tool is currently in beta, users have to send requests to be reviewed by Mapillary administrators, rather than having the autonomy to post whatever task they want assistance with. See also References External links |
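For readers who want to experiment with the imagery programmatically, the sketch below queries Mapillary's web API for images inside a bounding box. It is a rough illustration only: the endpoint, parameter names, and response shape follow the v4 "Graph API" that Mapillary documented after the 2020 relaunch, and all of them, along with the placeholder token, should be treated as assumptions to verify against the current documentation.

```python
# Hypothetical sketch: list a few Mapillary images inside a bounding
# box. Endpoint and field names are assumptions based on the v4 API;
# replace YOUR_TOKEN with a real client access token.
import requests

resp = requests.get(
    "https://graph.mapillary.com/images",
    params={
        "access_token": "YOUR_TOKEN",         # assumed auth scheme
        "bbox": "13.0,55.5,13.1,55.6",        # west,south,east,north (Malmö area)
        "fields": "id,geometry,captured_at",  # assumed field names
        "limit": 5,
    },
    timeout=30,
)
resp.raise_for_status()
for image in resp.json().get("data", []):
    # Each entry carries a GeoJSON point with lon/lat coordinates.
    print(image["id"], image["geometry"]["coordinates"])
```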
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Operetta] | [TOKENS: 5221] |
Contents Operetta Operetta is a form of theatre and a genre of light opera. It includes spoken dialogue, songs, and dances. It is lighter than opera in terms of its music, orchestral size, and the length of the work. Apart from its shorter length, the operetta is usually of a light and amusing character. The subject matter may portray "lovers' spats, mistaken identities, sudden reversals of fortune, and glittering parties". It sometimes also includes satirical commentary. "Operetta" is the Italian diminutive of "opera" and was used originally to describe a shorter, perhaps less ambitious work than an opera. Operetta provides an alternative to operatic performances in an accessible form targeting a different audience. Operetta became a recognizable form in the mid-19th century in France, and its popularity led to the development of many national styles of operetta. Distinctive styles emerged across countries including Austria-Hungary, Germany, England, Spain, the Philippines, Mexico, Cuba, and the United States. Through the transfer of operetta among these countries, a cultural cosmopolitanism emerged over the following decades. Operetta as a genre lost favor in the 1930s and gave way to modern musical theatre. Important operetta composers include Johann Strauss, Jacques Offenbach, Franz Lehár, and Francisco Alonso. Definitions The term operetta arose in mid-eighteenth-century Italy, and operetta was first acknowledged as an independent genre in Paris around 1850. Castil-Blaze's Dictionnaire de la musique moderne claims that the term has a long history and that Mozart was one of the first people to use the word operetta, disparagingly, describing operettas as "certain dramatic abortions, those miniature compositions in which one finds only cold songs and couplets from vaudeville". The definition of operetta has changed over the centuries and varies with each country's history of the genre. It is often used to refer to pieces that resemble Offenbach's one-act compositions, in contrast with his full-length compositions, the opéras bouffes. Offenbach invented this art form in response to the French government's restrictive laws on the staging of works that were longer than one act or contained more than four characters. History Operetta became recognized as a musical genre around 1850 in Paris. In 1870, the centre of operetta shifted to Vienna when Paris fell to the Prussians. The form of operetta continued to evolve through the First World War. There are some common characteristics among the operettas that flourished from the mid-1850s through the early 1900s, beginning with the French opéra-bouffe. They contain spoken dialogue interspersed between musical numbers, and often the principal characters, as well as the chorus, are called upon to dance; the music is largely derived from 19th-century operatic styles, with an emphasis on singable melodies. Operetta in the twentieth century became more complex and reached its pinnacle in Austria and Germany. Operetta is a precursor of the modern musical theatre, or the "musical". In the early decades of the 20th century, operetta continued to exist alongside the newer musicals, with each influencing the other. The distinctive traits of operetta are found in the musical theatre works of Jerome Kern, Richard Rodgers and Stephen Sondheim. 
Operetta in French Operetta was first created in Paris in the middle of the 19th century in order to satisfy a need for short, light works in contrast to the full-length entertainment of the increasingly serious opéra comique. By this time, the "comique" part of the genre name had become misleading: Georges Bizet's Carmen (1875) is an example of an opéra comique with a tragic plot. The meaning of "comique" was something closer to "humanistic", meant to portray "real life" in a more realistic way, representing tragedy and comedy next to each other, as Shakespeare had done centuries earlier. With this new connotation, opéra comique had dominated the French operatic stage since the decline of tragédie lyrique. The origins of French operetta lie with the comic actors who would perform dances and songs for crowds at fairs on open-air stages. At the beginning of the 18th century, these actors began to perform comic parodies of well-known operas. These performances formed operetta as a casual genre derived from opéra comique, while returning to a simpler form of music. Many scholars have debated which composer should be credited as the inventor of operetta: Jacques Offenbach or Hervé. It is generally concluded that Hervé laid the groundwork, and that Offenbach refined and developed the art form into the concept of operetta as we know it today. Therefore, "Offenbach is considered the father of French operetta – but so is Hervé." Hervé was a singer, composer, librettist, conductor, and scene painter. In 1842, he wrote the one-act opérette L'Ours et le pacha, based on the popular vaudeville by Eugène Scribe and X. B. Saintine. In 1848, Hervé made his first notable appearance on the Parisian stage, with Don Quichotte et Sancho Pança (after Cervantes), which can be considered the starting point for the new French musical theatre tradition. Hervé's most famous works are the Gounod parody Le petit Faust (1869) and Mam'zelle Nitouche (1883). Jacques Offenbach is most responsible for the development and popularization of operetta—also called opéras bouffes or opérettes—giving it its enormous vogue during the Second Empire and afterwards. In 1855, Offenbach obtained permission to open the Théâtre des Bouffes Parisiens, a theatre company that offered programs of two or three satirical one-act sketches. The company was so successful that these sketches were gradually elongated to fill an entire evening. However, Offenbach's productions were bound by the police prefecture in Paris, which specified the type of performance that would be allowed: "pantomimes with at most five performers, one-act comic musical dialogues for two to three actors, and dance routines with no more than five dancers; choruses were strictly forbidden." These rules shaped what came to be defined as operetta: "a small unpretentious operatic work that had no tragic implications and was designed to entertain the public". Two other French composers, Robert Planquette and Charles Lecocq, followed Offenbach's model and wrote the operettas Les Cloches de Corneville (The Bells of Corneville) and La Fille de Madame Angot (The Daughter of Madame Angot), both of which were major hits. The political limitations placed on Offenbach and Parisian theatre were gradually lifted, and operetta gained wide popularity. While Offenbach's earliest one-act pieces, including Les deux aveugles, Le violoneux and Ba-ta-clan (all 1855), did well, his first full-length operetta, Orphée aux enfers (1858), was by far the most successful. 
It became the first repertory operetta and was staged hundreds of times across Europe and beyond. Offenbach's legacy is seen in operettas throughout the late 19th century and beyond, and he encouraged Strauss the Younger to bring the genre to Austria-Hungary. Offenbach also traveled to the US and England, promoting the more than 100 operettas he wrote during his lifetime. This international travel resulted in the appearance of strong national schools in both nations. By the 1870s, however, Offenbach's popularity declined. The public showed more interest in romantic operettas displaying the "grace and refinement" of the late Romantic period, such as Messager's operetta Véronique and Louis Ganne's Les saltimbanques. The 20th century found French operetta even more out of favor as the international public turned to Anglo-American and Viennese operettas, which continued to develop the art form into the late Romantic era. Operetta in German and Hungarian Offenbach was unabashed about spreading operetta around the continent. In 1861, he staged some of his recent works at the Carltheater in Vienna, which paved the way for Austrian and German composers. Soon, Vienna became the epicenter of operetta production. It is because of the Viennese operetta, not the French, that the term is used to describe a full-length work. Additionally, after Austria's defeat by Prussia in 1866, operetta became the sign of a new age in Austria, marked by modernity and industrialization. The most significant composer of operetta in the German language was the Austrian Johann Strauss II (1825–1899). Strauss was recruited from the dance hall and introduced a distinct Viennese style to the genre. Strauss was highly influenced by the work of Offenbach, so much so that he collaborated with many of Offenbach's librettists for his most popular works. His operetta Die Fledermaus (1874) became the most performed operetta in the world, and it remains his most popular stage work. In all, Strauss wrote 16 operettas and one opera, most premiering with great success. Strauss's satire was often generic, unlike that of Offenbach, who commented on real-life matters. Strauss's operettas, waltzes, polkas, and marches often have a strongly Viennese style, and his popularity leads many to think of him as the national composer of Austria. The Theater an der Wien never failed to draw huge crowds when his stage works were first performed, and after many of the numbers the audience would call noisily for encores. Franz von Suppé, also known as Francesco Ezechiele Ermenegildo, Cavaliere Suppé-Demelli, was born in 1819, and his fame rivaled that of Offenbach. Suppé was a leading composer and conductor in Vienna, best known for his operettas Leichte Kavallerie (1866), Fatinitza (1876), and Boccaccio (1879). A contemporary of Strauss, Suppé composed over 30 operettas and 180 farces, ballets, and other stage works. Though most of his works have since fallen into obscurity, many of them have been reprised in films, cartoons, advertisements, and so on. Both Strauss and Suppé are considered the most notable composers of the Golden Age of Viennese operetta. Following the deaths of Johann Strauss and his contemporary Franz von Suppé, Franz Lehár was the heir apparent. Lehár is widely considered the leading operetta composer of the 20th century, and his most successful operetta, Die lustige Witwe (The Merry Widow), is one of the classic operettas still in repertory. 
Lehár helped lead operetta into the Silver Age of Viennese operetta, during which Viennese censorship laws were relaxed in 1919, and he is most responsible for giving the genre renewed vitality. After studying at the Prague Conservatory, Lehár began as a theatre violinist and then took off as a composer in the Austro-Hungarian Empire. In 1905, Lehár's Die lustige Witwe (The Merry Widow) paved the way for composers such as Leo Fall, Oscar Straus, and Emmerich Kálmán to continue the tradition of operetta. The Viennese tradition was carried on by Oscar Straus, Carl Zeller, Karl Millöcker, Leo Fall, Richard Heuberger, Edmund Eysler, Ralph Benatzky, Robert Stolz, Leo Ascher, Emmerich Kálmán, Nico Dostal, Fred Raymond, Igo Hofstetter, Paul Abraham and Ivo Tijardović in the 20th century. In the same way that Vienna was the center of Austrian operetta, Berlin was the center of German operetta. Berlin operetta often had its own style, including, especially after World War I, elements of jazz and other syncopated dance rhythms, a transatlantic style, and the presence of ragged marching tunes. Berlin operettas also sometimes included aspects of burlesque, revue, farce, or cabaret. Paul Lincke pioneered the Berlin operetta in 1899 with Frau Luna, which includes "Berliner Luft" ("Berlin Air"), which became the unofficial anthem of Berlin. His Lysistrata (1902) includes the song and tune "The Glow-Worm", which remains quite popular internationally. Much later, in the 1920s and 1930s, Kurt Weill took a more extreme form of the Berlin operetta style and used it in his operas, operettas, and musicals; it is arguable that some of Kurt Weill's compositions could be considered modernist operetta. The Berlin-style operetta coexisted with more bourgeois, charming, home-loving, and nationalistic German operettas – some of which were called Volksoperetten (folk operettas). A prime example is Leon Jessel's extremely popular 1917 Schwarzwaldmädel (Black Forest Girl). These bucolic, nostalgic, home-loving operettas were officially preferred over Berlin-style operettas after 1933, when the Nazis came to power and instituted the Reichsmusikkammer (State Music Institute), which deprecated and banned "decadent" music like jazz and similar "foreign" musical forms. At the beginning of the twenty-first century, a German revival of operetta was an unforeseen theatrical development. Notable German operetta composers include Paul Lincke, Eduard Künneke, Walter Kollo, Jean Gilbert, Leon Jessel, Rudolf Dellinger, Walter Goetze and Ludwig Schmidseder. Operetta in English Offenbach's influence reached England by the 1860s. Arthur Sullivan, of the Gilbert and Sullivan duo, composed Cox and Box (1866) as a direct reaction to Offenbach's Les deux aveugles (1855). Gilbert and Sullivan solidified the format in England with their long-running collaboration during the Victorian era. With W. S. Gilbert writing the libretti and Sullivan composing the music, the pair produced 14 comic operas, later called the Savoy Operas. Most were enormously popular in Britain, the U.S., and elsewhere. Gilbert, Sullivan, and their producer Richard D'Oyly Carte themselves called their joint works comic operas to distinguish this family-friendly fare from the risqué French operettas of the 1850s and 1860s. Their works, such as H.M.S. Pinafore, The Pirates of Penzance and The Mikado, continue to enjoy regular performances throughout the English-speaking world. 
While many of these operas seem to be very light-hearted, works such as The Mikado offered political commentary on the British government and military, with one of the main topics being capital punishment, which was still widely used at the time. English operetta continued into the 1890s, with works by composers such as Edward German, Ivan Caryll and Sidney Jones. These quickly evolved into the lighter song-and-dance pieces known as Edwardian musical comedy. Beginning in 1907, with The Merry Widow, many of the Viennese operettas were adapted very successfully for the English stage. To explain this phenomenon, Derek Scott writes, In January 1908, London's Daily Mail claimed that The Merry Widow had been performed 450 times in Vienna, 400 times in Berlin, 350 times in St Petersburg, 300 times in Copenhagen, and was currently playing every evening in Europe in nine languages. In the USA, five companies were presenting it, and "the rush for tickets at the New Amsterdam Theatre" was likened to "the feverish crowding round the doors of a threatened bank". Stan Czech, in his Lehár biography, claims that by 1910 it had been performed "around 18,000 times in ten languages on 154 American, 142 German, and 135 British stages". The international embrace of operetta directly correlated with the development of both the West End in London and Broadway in New York. American audiences were first introduced to operetta through Gilbert and Sullivan's H.M.S. Pinafore in 1878. American operetta composers included Victor Herbert, whose works at the beginning of the 20th century were influenced by both Viennese operetta and Gilbert and Sullivan. He was followed by Sigmund Romberg and Rudolf Friml. Nevertheless, American operetta largely gave way, by the end of World War I, to musicals, such as the Princess Theatre musicals, and revues, followed by the musicals of Rodgers and Hart, Cole Porter, Irving Berlin and others. Another notable operetta in English is Candide by Leonard Bernstein, advertised as a "comic operetta". Candide's score was in some ways typical of its announced genre, with some waltzes, but Bernstein added the schottische, gavotte, and other dances, and also entered the opera house with the aria "Glitter and Be Gay". Operetta in Italian Operetta was the first imported vocal genre in Italy. From the 1860s, French and Viennese composers such as Offenbach, Hervé, Suppé, Strauss Jr and Lehár significantly influenced the operatic tradition of Italy. The widespread popularity of foreign operetta in Italy reached its climax at the turn of the century, in particular with the success of La vedova allegra, which premiered in Milan in 1907. Italian operetta composers tended to stretch the definition of an "operetta" more than those of other nations, in order to fit the beauty of the Italian Romantic opera style. An example is Giacomo Puccini, who developed his work in the realistic verismo style and would compose "operettas in three acts". Other notable composers of Italian operetta include Vincenzo Valente, Ruggero Leoncavallo, Pasquale Mario Costa, Pietro Mascagni, Carlo Lombardo, Enrico Toselli, Virgilio Ranzato and Giuseppe Pietri. The audiences of operetta during the 1860s and 1870s were described as rowdy and loud. Operetta was one of the major controversies in Italian music and culture between the 1860s and the 1920s, a period in which strong nationalist currents strove to unify Italy's national identity. 
Because operetta was recognized as a foreign genre, it was perceived as an art form that would contaminate Italian opera or illegitimately undermine its primacy on the stage. It was not until the early twentieth century that Italian composers systematically engaged in writing operetta. Operetta in Romanian In 1848, Baba Hârca (Baba the Old Witch) became the first operetta created in Romania; composed by the Moldavian composer of German-Saxon origin Alexandru Flechtenmacher, who was seeking a distinctly Romanian musical style, it premiered on 26 December 1848 at the National Theatre in Iași. The work is a vaudeville with an unusually developed musical dimension. Baba the Witch is a popular figure from traditional Romanian folktales, credited with freezing waters and living in isolation in a cave or at the top of a tall tree; fairy tales also attribute to her a benevolent aspect. In 1882, another major success marked the birth of operetta in the country: Crai Nou (The New Moon), by the young composer Ciprian Porumbescu, with a libretto by Vasile Alecsandri. The premiere took place in Brașov on an improvised stage, in the Romanian Gymnasium's festival hall, on 27 February 1882. The work, which highlights Romanian culture and traditions in contrast to Viennese culture, displays a distinctly patriotic character at a time when Transylvania was under Austro-Hungarian rule. It is particularly renowned for its famous Viennese-style chorus and for Porumbescu's success in integrating the Romanian folk spirit—such as the hora, the doina, peasant dances, and traditional songs—into lyrical art while combining it with Western influences. Three other composers were among the first creators of Romanian operetta: Eduard Caudella, with Harță Răzeșul (1872); George Stephănescu, with Sânziana și Pepelea (1880), on a libretto by Vasile Alecsandri, and Scaiul bărbaților (1885); and Constantin Dimitrescu, with Sergentul Cartuș (1895) and Nini (1897). They played a pivotal role in cultivating and establishing the Romanian public's keen interest in this art form, a genre that has remained popular to the present day. 30 October 1954 marked a milestone in Romanian creative life with the premiere of Lăsați-mă să cânt (Let Me Sing) by Gherase Dendrino, to a libretto by Erastia Sever, Liliana Delescu, and Viorel Cosma, in which the leading role was performed by Ion Dacian. This anniversary work, written in 1953 to commemorate the 100th anniversary of the birth of Ciprian Porumbescu, is a celebration of his operetta Crai Nou, composed 72 years earlier, in 1882; it thus forms a bridge to the first Romanian operetta. In a context of decline for operetta in Romania, the production presented on the stage of the State Operetta Theatre was an enormous success, undoubtedly owed in large part to its excellent cast. The work was also performed abroad, in other countries of the Eastern Bloc, and its libretto was translated into German, Czech, Russian, and Hungarian. The successors of Ion Dacian continued to maintain a balance between works from the classical Austrian and Hungarian repertoires (Strauss, Lehár, Kálmán, Benatzky, etc.) and Romanian creations such as Spune inimioară, spune! (Say, My Heart, Say!, 1972) by Elly Roman, Mătușa mea, Faustina (My Aunt Faustina, 1973) by Liviu Cavassi and Doru Butoiescu, and Raspantia (1975) and Leonard (1976) by Florin Comișel. 
The domestic programming reflected the contributions of authors who played a significant role in Romanian operetta: Gherase Dendrino (1901–1973), Filaret Barbu (1903–1984), Nicolae Kirculescu (1903–1985), Elly Roman (1905–1996), Alfred Mendelsohn (1910–1966), Viorel Doboș (1917–1985), Henry Mălineanu (1920–2000), Florin Comișel (1922–1977), and George Grigoriu (1927–1999). This approach allowed the theatre to combine an international tradition with Romanian cultural identity, sustaining public interest in the operetta genre. In 1977, to celebrate the centenary of Romania's independence, a special work was staged: Eternel Iubiri (Eternal Love), composed by George Grigoriu with a libretto by Constantin Florea. The premiere took place on 7 May 1977 at the State Operetta Theatre of Bucharest. The work, centered on the struggle against the Turks, aligned with the nationalist propaganda of the Communist Party, emphasizing patriotism and the heroes of Romanian history. This national-communist cultural policy, which became highly visible under Ceaușescu, had already been initiated in the 1960s by Gheorghe Gheorghiu-Dej.[page range too broad] The 2002–2003 season opened with a major national premiere, Fântâna Blanduziei (The Fountain of Blanduzia), created by one of the most renowned contemporary composers, Cornel Trăilescu, to a libretto by the poet and playwright Aurel Storin, based on the original work (1883) by the Romanian poet Vasile Alecsandri. Lăsați-mă să cânt returned to the repertoire during the 2003–2004 season. Productions multiplied until 2005, strengthening the institution's identity and visibility within the Romanian cultural landscape. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-MobleBroadbandITUDynamic2012-95] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
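The division of labor between the two principal name spaces described above (the DNS and the IP address space) can be illustrated in a few lines. The sketch below is a minimal example using only Python's standard library: it asks the operating system's resolver to translate a host name into IP addresses; the host name shown is just a placeholder.

```python
# Minimal sketch: map a DNS name to IP addresses using the operating
# system's resolver, via Python's standard library only.
import socket

# getaddrinfo performs the lookup and returns one entry per address
# family the host supports (IPv4, IPv6, ...).
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "example.com", 80, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])  # the resolved IP address
```

The DNS maps human-readable names onto the IP address space; the numeric addresses printed here are what the network layer actually routes on.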
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and CompuServe established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than was possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members, in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started experiencing similar characteristics as that of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
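The HTTP protocol that Berners-Lee introduced, described in the passage above, still follows the same request/response pattern today. As a minimal sketch using only Python's standard library, the snippet below issues an HTTP GET request; the host name is again a placeholder. Name resolution, the TCP connection, and the HTTP exchange are each handled by a different layer of the stack.

```python
# Minimal sketch: fetch a web page over HTTP with Python's standard
# library. DNS lookup and the TCP connection happen implicitly when
# the request is sent; HTTP itself is just structured text on top.
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/")        # send the request line and headers
response = conn.getresponse()   # read back the status line and headers
print(response.status, response.reason)
body = response.read()          # the HTML document itself
print(body[:80])                # first few bytes of the page
conn.close()
```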
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population having access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (the incorrect display of some languages' characters, illustrated in the sketch after this paragraph) still remain. Several neologisms exist that refer to Internet users: netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The Internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in the reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for the sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to the examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. 
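As a minimal illustration of the mojibake mentioned above: bytes written in one character encoding and read back in another produce garbled text, and reversing the mistaken step recovers the original. The string here is an arbitrary example.

```python
# Mojibake in miniature: UTF-8 bytes misread as Latin-1.
text = "café"
utf8_bytes = text.encode("utf-8")        # b'caf\xc3\xa9'
garbled = utf8_bytes.decode("latin-1")   # 'cafÃ©'  <- mojibake
print(garbled)

# Reversing the mistaken decode step restores the original text.
restored = garbled.encode("latin-1").decode("utf-8")
assert restored == text
```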
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions of videos daily, and upload hundreds of thousands. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP, so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to form easily, communicate cheaply, and share ideas. 
One product of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The Internet also enables cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, including insults and hate speech and, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online, such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Out of naivety, children may also post personal information about themselves online, which could put them or their families at risk unless they are warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading while interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, worldwide e-commerce, combining global business-to-business and business-to-consumer transactions, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet, such as maps and location-aware services, may serve to reinforce economic inequality and the digital divide. 
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet as a new method of organizing to carry out their missions, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize during the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government, and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners who may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards. 
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to a file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. 
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC for Africa, ARIN for North America, APNIC for Asia and the Pacific, LACNIC for Latin America and the Caribbean, and RIPE NCC for Europe, the Middle East, and parts of Central Asia.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users, who access the Internet only when needed to perform a function or obtain information, represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide Internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, Internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for the fiber optic submarine communication cables that connect the Internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, after its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the application layer, the transport layer, the internet layer, and the link layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct Internet packets to their destinations. 
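To make the four layers concrete, here is a short sketch using Python's standard library and the reserved example domain example.org; it is an illustration of the conceptual layering as seen from user space, not a definitive mapping, and the host name is chosen only for demonstration.

```python
import socket

# A rough tour of the four conceptual layers from a user program.
host = "example.org"  # reserved example domain, used here for illustration

# Internet layer: resolve the host name to an IP address, the number
# that routers use to guide packets to their destination.
family, _, _, _, sockaddr = socket.getaddrinfo(
    host, 80, proto=socket.IPPROTO_TCP)[0]
print("IP address:", sockaddr[0])

# Transport layer: open a TCP connection, a reliable byte stream
# between two endpoints.
with socket.socket(family, socket.SOCK_STREAM) as sock:
    sock.settimeout(10)
    sock.connect(sockaddr)
    # Application layer: speak HTTP over the TCP stream.
    request = f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))
    print(sock.recv(200).decode("latin-1", "replace"))

# The link layer (Ethernet, Wi-Fi, etc.) is handled entirely by the
# operating system and network hardware, invisible to this program.
```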
IP addresses consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol, or are configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently growing around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields: the network number or routing prefix, and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets toward a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. 
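The CIDR arithmetic above can be checked with Python's standard ipaddress module; a minimal sketch using the documentation prefixes from the text:

```python
import ipaddress

# The example prefix from the text: 24 bits of network number,
# 8 bits of host identifier.
net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)           # 255.255.255.0
print(net.num_addresses)     # 256 (198.51.100.0 through 198.51.100.255)
print(net.network_address)   # 198.51.100.0

# Membership test: does an address fall inside the prefix?
print(ipaddress.ip_address("198.51.100.57") in net)   # True

# The routing prefix can be recovered from any address in the network
# by a bitwise AND with the netmask, as described above.
addr = int(ipaddress.ip_address("198.51.100.57"))
mask = int(net.netmask)
print(ipaddress.ip_address(addr & mask))              # 198.51.100.0

# The IPv6 example block: a /32 leaves 128 - 32 = 96 host bits.
net6 = ipaddress.ip_network("2001:db8::/32")
print(net6.num_addresses == 2**96)                    # True
```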
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers conducting cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure, such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, access to certain types of web sites, or communication via email or chat with certain parties. Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for the interception and analysis of data. Similar systems are operated by the Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. 
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet, but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block specific offensive websites on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: global Internet traffic volume, in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy: a 2014 peer-reviewed research paper found claims published in the literature during the preceding decade differing by a factor of 20,000, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide, as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. 
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/API] | [TOKENS: 3859] |
API An application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software. A document or standard that describes how to build such a connection or interface is called an API specification. A computer system that meets this standard is said to implement or expose an API. The term API may refer either to the specification or to the implementation. In contrast to a user interface, which connects a computer to a person, an application programming interface connects computers or pieces of software to each other. It is not intended to be used directly by a person (the end user) other than a computer programmer who is incorporating it into software. An API is often made up of different parts which act as tools or services that are available to the programmer. A program or a programmer that uses one of these parts is said to call that portion of the API. The calls that make up the API are also known as subroutines, methods, requests, or endpoints. An API specification defines these calls, meaning that it explains how to use or implement them. One purpose of APIs is to hide the internal details of how a system works, exposing only those parts a programmer will find useful and keeping them consistent even if the internal details later change. An API may be custom-built for a particular pair of systems, or it may be a shared standard allowing interoperability among many systems. The term API is often used to refer to web APIs, which allow communication between computers that are joined by the internet. There are also APIs for programming languages, software libraries, computer operating systems, and computer hardware. APIs originated in the 1940s, though the term did not emerge until the 1960s and 70s. Purpose An API opens a software system to interactions from the outside. It allows two software systems to communicate across a boundary (an interface) using mutually agreed-upon signals. In other words, an API connects software entities together. Unlike a user interface, an API is typically not visible to users. It is an "under the hood" portion of a software system, used for machine-to-machine communication. A well-designed API exposes only objects or actions needed by software or software developers. It hides details that have no use. This abstraction simplifies programming. Building software using APIs has been compared to using building-block toys, such as Lego bricks. Software services or software libraries are analogous to the bricks; they may be joined together via their APIs, composing a new software product. The process of joining is called integration. As an example, consider a weather sensor that offers an API. When a certain message is transmitted to the sensor, it will detect the current weather conditions and reply with a weather report. The message that activates the sensor is an API call, and the weather report is an API response. A weather forecasting app might integrate with a number of weather sensor APIs, gathering weather data from throughout a geographical area. An API is often compared to a contract. It represents an agreement between parties: a service provider who offers the API and the software developers who rely upon it. If the API remains stable, or if it changes only in predictable ways, developers' confidence in the API will increase. This may increase their use of the API. 
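A minimal Python sketch of the weather-sensor example above; the host name, path, and response fields are hypothetical, invented purely for illustration:

```python
import json
from urllib.request import urlopen

# Hypothetical sensor endpoint; the host name, path and response
# fields are invented for illustration only.
SENSOR_URL = "http://weather-sensor.local/api/current"

def get_weather_report() -> dict:
    """Send the API call to the sensor and return its API response."""
    with urlopen(SENSOR_URL, timeout=5) as response:
        return json.load(response)   # e.g. {"temp_c": 21.5, "humidity": 0.4}

# A forecasting app might integrate several such sensors, gathering
# reports from across a geographical area:
#   reports = [get_weather_report_from(url) for url in sensor_urls]
```

The calling program needs to know only the message format the API promises, not how the sensor actually measures the weather, which is the contract idea described above.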
History of the term The term API initially described an interface only for end-user-facing programs, known as application programs. This origin is still reflected in the name "application programming interface." Today, the term is broader, including also utility software and even hardware interfaces. The idea of the API is much older than the term itself. British computer scientists Maurice Wilkes and David Wheeler worked on a modular software library in the 1940s for EDSAC, an early computer. The subroutines in this library were stored on punched paper tape organized in a filing cabinet. This cabinet also contained what Wilkes and Wheeler called a "library catalog" of notes about each subroutine and how to incorporate it into a program. Today, such a catalog would be called an API (or an API specification or API documentation) because it instructs a programmer on how to use (or "call") each subroutine that the programmer needs. Wilkes and Wheeler's book The Preparation of Programs for an Electronic Digital Computer contains the first published API specification. Joshua Bloch considers that Wilkes and Wheeler "latently invented" the API, because it is more of a concept that is discovered than invented. The term "application program interface" (without an -ing suffix) is first recorded in a paper called Data structures and techniques for remote computer graphics presented at an AFIPS conference in 1968. The authors of this paper use the term to describe the interaction of an application—a graphics program in this case—with the rest of the computer system. A consistent application interface (consisting of Fortran subroutine calls) was intended to free the programmer from dealing with idiosyncrasies of the graphics display device, and to provide hardware independence if the computer or the display were replaced. The term was introduced to the field of databases by C. J. Date in a 1974 paper called The Relational and Network Approaches: Comparison of the Application Programming Interface. An API became a part of the ANSI/SPARC framework for database management systems. This framework treated the application programming interface separately from other interfaces, such as the query interface. Database professionals in the 1970s observed these different interfaces could be combined; a sufficiently rich application interface could support the other interfaces as well. This observation led to APIs that supported all types of programming, not just application programming. By 1990, the API was defined simply as "a set of services available to a programmer for performing certain tasks" by technologist Carl Malamud. The idea of the API was expanded again with the dawn of remote procedure calls and web APIs. As computer networks became common in the 1970s and 80s, programmers wanted to call libraries located not only on their local computers, but on computers located elsewhere. These remote procedure calls were well supported by the Java language in particular. In the 1990s, with the spread of the internet, standards like CORBA, COM, and DCOM competed to become the most common way to expose API services. Roy Fielding's dissertation Architectural Styles and the Design of Network-based Software Architectures at UC Irvine in 2000 outlined Representational state transfer (REST) and described the idea of a "network-based Application Programming Interface" that Fielding contrasted with traditional "library-based" APIs. 
XML and JSON web APIs saw widespread commercial adoption beginning in 2000 and continuing as of 2021. The web API is now the most common meaning of the term API. The Semantic Web proposed by Tim Berners-Lee in 2001 included "semantic APIs" that recast the API as an open, distributed data interface rather than a software behavior interface. Proprietary interfaces and agents became more widespread than open ones, but the idea of the API as a data interface took hold. Because web APIs are widely used to exchange data of all kinds online, API has become a broad term describing much of the communication on the internet. When used in this way, the term API has overlap in meaning with the term communication protocol. Types The interface to a software library is one type of API. The API describes and prescribes the "expected behavior" (a specification) while the library is an "actual implementation" of this set of rules. A single API can have multiple implementations (or none, being abstract) in the form of different libraries that share the same programming interface. The separation of the API from its implementation can allow programs written in one language to use a library written in another. For example, because Scala and Java compile to compatible bytecode, Scala developers can take advantage of any Java API. API use can vary depending on the type of programming language involved. An API for a procedural language such as Lua could consist primarily of basic routines to execute code, manipulate data or handle errors, while an API for an object-oriented language, such as Java, would provide a specification of classes and their class methods. Hyrum's law states that "With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody." Meanwhile, several studies show that most applications that use an API tend to use a small part of the API. Language bindings are also APIs. By mapping the features and capabilities of one language to an interface implemented in another language, a language binding allows a library or service written in one language to be used when developing in another language. Tools such as SWIG and F2PY, a Fortran-to-Python interface generator, facilitate the creation of such interfaces. An API can also be related to a software framework: a framework can be based on several libraries implementing several APIs, but unlike the normal use of an API, the access to the behavior built into the framework is mediated by extending its content with new classes plugged into the framework itself. Moreover, the overall program flow of control can be out of the control of the caller and in the framework's hands by inversion of control or a similar mechanism. An API can specify the interface between an application and the operating system. POSIX, for example, specifies a set of common APIs that aim to enable an application written for a POSIX-conformant operating system to be compiled for another POSIX-conformant operating system. Linux and Berkeley Software Distribution are examples of operating systems that implement the POSIX APIs. Microsoft has shown a strong commitment to a backward-compatible API, particularly within its Windows API (Win32) library, so older applications may run on newer versions of Windows using an executable-specific setting called "Compatibility Mode". The extent to which Microsoft developers' access to the company's internal operating-system APIs gives them an advantage is unclear. 
Richard A. Shaffer of Technologic Computer Letter in 1987 compared the situation to a baseball game in which "Microsoft owns all the bats and the field", and large vendors like Lotus Development and Ashton-Tate reportedly received information about MS-DOS 5.0 that smaller software developers did not. Ed Esber of Ashton-Tate said in a 1987 interview, however, that Bill Gates told him that his developers sometimes had to rewrite software based on early APIs. Gates noted in the interview that Microsoft's Apple Macintosh applications were more successful than those for MS-DOS because his company did not have to also devote resources to Mac OS. An API differs from an application binary interface (ABI) in that an API is source code based while an ABI is binary based. For instance, POSIX provides APIs while the Linux Standard Base provides an ABI. Remote APIs allow developers to manipulate remote resources through protocols, specific standards for communication that allow different technologies to work together, regardless of language or platform. For example, the Java Database Connectivity API allows developers to query many different types of databases with the same set of functions, while the Java remote method invocation API uses the Java Remote Method Protocol to allow invocation of functions that operate remotely but appear local to the developer. Therefore, remote APIs are useful in maintaining the object abstraction in object-oriented programming; a method call, executed locally on a proxy object, invokes the corresponding method on the remote object, using the remoting protocol, and acquires the result to be used locally as a return value. A modification of the proxy object will also result in a corresponding modification of the remote object. Web APIs are the defined interfaces through which interactions happen between an enterprise and the applications that use its assets, often accompanied by a Service Level Agreement (SLA) that specifies the functional provider and exposes the service path or URL for API users. An API approach is an architectural approach that revolves around providing a program interface to a set of services to different applications serving different types of consumers. When used in the context of web development, an API is typically defined as a set of specifications, such as Hypertext Transfer Protocol (HTTP) request messages, along with a definition of the structure of response messages, usually in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. An example might be a shipping company API that can be added to an eCommerce-focused website to facilitate ordering shipping services and automatically include current shipping rates, without the site developer having to enter the shipper's rate table into a web database. While "web API" historically has been virtually synonymous with web service, the recent[when?] trend (so-called Web 2.0) has been moving away from Simple Object Access Protocol (SOAP) based web services and service-oriented architecture (SOA) towards more direct representational state transfer (REST) style web resources and resource-oriented architecture (ROA). Part of this trend is related to the Semantic Web movement toward Resource Description Framework (RDF), a concept to promote web-based ontology engineering technologies. Web APIs allow the combination of multiple APIs into new applications known as mashups. 
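To ground the shipping-company example above: a web API call is typically an HTTP request carrying structured data, answered with a structured response, commonly JSON. A minimal sketch using only Python's standard library; the endpoint URL and every field name are hypothetical, invented for illustration:

```python
import json
from urllib import request

# Hypothetical shipping-rate endpoint, echoing the shipping-company
# example above; the URL and field names are invented for illustration.
API_URL = "https://api.example-shipper.test/v1/rates"

# The API specification would define the structure of this request body.
payload = json.dumps({
    "origin": "10001",        # hypothetical origin postal code
    "destination": "94105",   # hypothetical destination postal code
    "weight_kg": 2.5,
}).encode("utf-8")

req = request.Request(
    API_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# The response structure is likewise defined by the specification,
# e.g. {"rates": [{"service": "ground", "price_usd": 8.40}, ...]}.
with request.urlopen(req, timeout=10) as response:
    rates = json.load(response)
print(rates)
```

The site developer programs against this request/response contract and never needs to see the shipper's internal rate tables, which is the information hiding the next paragraphs describe.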
In the social media space, web APIs have allowed web communities to facilitate the sharing of content and data between communities and applications. In this way, content that is created in one place can be dynamically posted and updated in multiple locations on the web. For example, Twitter's REST API allows developers to access core Twitter data, and the Search API provides methods for developers to interact with Twitter Search and trends data. Design The design of an API has a significant impact on its usage. The principle of information hiding describes the role of programming interfaces as enabling modular programming by hiding the implementation details of the modules, so that users of modules need not understand the complexities inside them. Thus, the design of an API attempts to provide only the tools a user would expect. The design of programming interfaces represents an important part of software architecture, the organization of a complex piece of software. Release policies APIs are one of the more common ways technology companies integrate. Those that provide and use APIs are considered members of a business ecosystem. The main policies for releasing an API are private (the API is for internal company use only), partner (only specific business partners may use the API), and public (the API is available for use by the general public). An important factor when an API becomes public is its "interface stability". Changes to the API, for example adding new parameters to a function call, could break compatibility with the clients that depend on that API. When parts of a publicly presented API are subject to change and thus not stable, such parts of a particular API should be documented explicitly as "unstable". For example, in the Google Guava library, the parts that are considered unstable, and that might change soon, are marked with the Java annotation @Beta. A public API can sometimes declare parts of itself as deprecated or rescinded. This usually means that such a part of the API should be considered a candidate for being removed, or for being modified in a backward-incompatible way. Such declarations allow developers to transition away from parts of the API that will be removed or no longer supported in the future. Client code may contain innovative or opportunistic usages that were not intended by the API designers. In other words, for a library with a significant user base, when an element becomes part of the public API, it may be used in diverse ways. On February 19, 2020, Akamai published their annual "State of the Internet" report, showcasing the growing trend of cybercriminals targeting public API platforms at financial services worldwide. From December 2017 through November 2019, Akamai witnessed 85.42 billion credential violation attacks. About 20%, or 16.55 billion, were against hostnames defined as API endpoints. Of these, 473.5 million targeted financial services sector organizations. API documentation API documentation describes what services an API offers and how to use those services, aiming to cover everything a client would need to know for practical purposes. Documentation is crucial for the development and maintenance of applications using the API. API documentation is traditionally found in documentation files but can also be found in social media such as blogs, forums, and Q&A websites. Traditional documentation files are often presented via a documentation system, such as Javadoc or Pydoc, that has a consistent appearance and structure. However, the types of content included in the documentation differ from API to API. 
In the interest of clarity, API documentation may include a description of the classes and methods in the API as well as typical usage scenarios, code snippets, design rationales, performance discussions, and contracts, but implementation details of the API services themselves are usually omitted. It can take a number of forms, including instructional documents, tutorials, and reference works, and may include a variety of information types, such as guides and descriptions of functionality. Restrictions and limitations on how the API can be used are also covered by the documentation. For instance, documentation for an API function could note that its parameters cannot be null, or that the function itself is not thread-safe. Because API documentation tends to be comprehensive, it is a challenge for writers to keep the documentation updated and for users to read it carefully, potentially yielding bugs. API documentation can be enriched with metadata information like Java annotations. This metadata can be used by the compiler, tools, and the run-time environment to implement custom behaviors or custom handling. It is possible to generate API documentation in a data-driven manner. By observing many programs that use a given API, it is possible to infer the typical usages, as well as the required contracts and directives. Then, templates can be used to generate natural language from the mined data. Dispute over copyright protection for APIs In 2010, Oracle Corporation sued Google for having distributed a new implementation of Java embedded in the Android operating system. Google had not acquired any permission to reproduce the Java API, although permission had been given to the similar OpenJDK project. Judge William Alsup ruled in the Oracle v. Google case that APIs cannot be copyrighted in the U.S. and that a victory for Oracle would have widely expanded copyright protection to a "functional set of symbols" and allowed the copyrighting of simple software commands: To accept Oracle's claim would be to allow anyone to copyright one version of code to carry out a system of commands and thereby bar all others from writing its different versions to carry out all or part of the same commands. Alsup's ruling was overturned in 2014 on appeal to the Court of Appeals for the Federal Circuit, though the question of whether such use of APIs constitutes fair use was left unresolved. In 2016, following a two-week trial, a jury determined that Google's reimplementation of the Java API constituted fair use, but Oracle vowed to appeal the decision. Oracle won on its appeal, with the Court of Appeals for the Federal Circuit ruling that Google's use of the APIs did not qualify for fair use. In 2019, Google appealed to the Supreme Court of the United States over both the copyrightability and fair use rulings, and the Supreme Court granted review. Due to the COVID-19 pandemic, the oral hearings in the case were delayed until October 2020. The case was decided by the Supreme Court in Google's favor. |
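The legal dispute above turned on the distinction, noted under Types, between an API's specification and any particular implementation of it. A minimal sketch of that distinction in Python; the class and method names are invented for illustration:

```python
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """The API: a specification of expected behavior, with no implementation."""

    @abstractmethod
    def get(self, key: str) -> str | None: ...

    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

class InMemoryStore(KeyValueStore):
    """One actual implementation of the API; others (file-backed,
    networked) could share the same programming interface."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def get(self, key: str) -> str | None:
        return self._data.get(key)

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

def cache_result(store: KeyValueStore) -> None:
    # Client code programs against the API, not the implementation,
    # so any conforming implementation can be swapped in.
    store.put("answer", "42")
    assert store.get("answer") == "42"

cache_result(InMemoryStore())
```

Client code written against the abstract interface continues to work when a different implementation is substituted, which is the property that reimplementing an existing API preserves.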
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-153] | [TOKENS: 10515] |
Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026, Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he has held Canadian citizenship from birth, as his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership during the AI boom of the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package worth $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published in 2025 and 2026 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth. 
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023, Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol Musk had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies" where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten severely, leading to him being hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books, and had attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School. 
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H1-B. According to numerous former business associates and shareholders, Musk said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. Replying to Rolling Stone, Musk denounced the notion that they started their company with funds borrowed from Errol Musk, but in a tweet, he recognized that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with online bank Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to resulting technological issues and lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320,000,000 in 2025). 
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth-chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and Chief Engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, SpaceX was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform. Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, the Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025[update], over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement. 
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm.[page needed] Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass-production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several commercially successful electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In November 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over him tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials. 
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials—which have caused the deaths of some monkeys—have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink.[needs update] In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and had questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder.[d] Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter. By the end of April Musk had successfully concluded his bid for approximately $44 billion. This included approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives including CEO Parag Agrawal; Musk became the CEO instead. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk lessened content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of Hunter Biden's laptop controversy in the lead-up to the 2020 presidential election. Musk also promised to step down as CEO after a Twitter poll, and five months later, Musk stepped down as CEO and transitioned his role to executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X continues to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks, which hinders visibility and is considered a form of shadow banning, or by suspending their accounts without justification. 
Other activities In August 2013, Musk announced plans for a version of a vactrain, and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence, intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories regarding the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example, the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 special election in Texas's 34th congressional district. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income, and endorsed Kanye West's 2020 presidential campaign. 
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign, and hosted DeSantis's campaign announcement on a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it was a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021 and featuring executives from General Motors, Ford and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that this was a nod to United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and suggested that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he criticized Biden as "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen, and other capitalists actually flourished under Biden, but the tech leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally and promoted conspiracy theories and falsehoods about Democrats, election fraud, and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring. The organization commented that they had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as ownership of X. An NBC News analysis found he had boosted far-right political movements to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023. 
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or a fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, with Musk accepting the offer. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role is not clear. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE. A federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly in response to DOGE, has attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He has prioritized secrecy within the organization and has accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by this time, most of them of children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when the special government employee's 130-day deadline expired, with a White House official confirming that Musk's offboarding from the Trump administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025. On June 5, 2025, Musk posted on X: "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public." 
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump, with its most notable event being Musk alleging on X (formerly Twitter) on June 5, 2025, that Trump had ties to sex offender Jeffrey Epstein. Trump responded on Truth Social stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth tax, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars. He has repeatedly pushed for humanity to colonize Mars in order to become an interplanetary species and lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While describing himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and was described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024. 
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was subsequently attacked as being responsible for spreading misinformation and amplifying the far-right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down for three years as Tesla chairman but was able to remain as CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla stock price ("too high imo") violated the agreement. Freedom of Information Act (FOIA)-released records showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome in a TED2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ... 
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs, and that if drugs somehow improved his productivity, "I would definitely take them!". The New York Times' investigations revealed Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates who have become troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has stated has a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. Musk has justified the boosting by claiming that all top accounts do it, so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and specified the inclusion of the historical figure Yasuke in the Assassin's Creed game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that they were focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest that the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child Nevada Musk died of sudden infant death syndrome at the age of 10 weeks. After Nevada's death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year. 
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations as it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the pregnancy, Musk confirmed reports that the couple were "semi-separated" in September 2021; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Musk has taken X Æ A-Xii to multiple official events in Washington, D.C. during Trump's second term in office. In July 2022, The Wall Street Journal reported that Musk allegedly had an affair with Nicole Shanahan, the wife of Google co-founder Sergey Brin, in 2021, leading to their divorce the following year. Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported Musk bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure if he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from The Wall Street Journal indicated that $1 million of these payments to St. Clair was structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas Day in 2012, Musk emailed Epstein asking "Do you have any parties planned? 
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans to come to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house, to which Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012. Around 75% of his wealth was derived from Tesla stock in November 2020, although he describes himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021; $400 billion in December 2024; $500 billion in October 2025; $600 billion in mid-December 2025; $700 billion later that month; and $800 billion in February 2026. In November 2025, a Tesla pay package worth potentially $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their separate industries starting in the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions, while also often making controversial statements, contrary to other billionaires who prefer reclusiveness to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn denunciation for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like British survivors of grooming gangs. 
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president" or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Then-Time editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too." 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Benjamin_Franklin] | [TOKENS: 18468] |
Benjamin Franklin Benjamin Franklin (January 17, 1706 [O.S. January 6, 1705][Note 1] – April 17, 1790) was an American polymath: a writer, scientist, inventor, statesman, diplomat, printer, publisher and political philosopher. Among the most influential intellectuals of his time, Franklin was one of the Founding Fathers of the United States; a drafter and signer of the Declaration of Independence; and the first postmaster general. Born in the Province of Massachusetts Bay, Franklin became a successful newspaper editor and printer in Philadelphia, the leading city in the colonies, publishing The Pennsylvania Gazette at age 23. He became wealthy publishing this and Poor Richard's Almanack, which he wrote under the pseudonym "Richard Saunders". After 1767, he was associated with the Pennsylvania Chronicle, a newspaper known for its revolutionary sentiments and criticisms of the policies of the British Parliament and the Crown. He pioneered and was the first president of the Academy and College of Philadelphia, which opened in 1751 and later became the University of Pennsylvania. He organized and was the first secretary of the American Philosophical Society and was elected its president in 1769. He was appointed deputy postmaster-general for the British colonies in 1753, which enabled him to set up the first national communications network. Franklin was active in community affairs and colonial and state politics, as well as national and international affairs. He became a hero in North America when, as an agent in London for several colonies, he spearheaded the repeal of the unpopular Stamp Act by the British Parliament. An accomplished diplomat, he was widely admired as the first U.S. ambassador to France and was a major figure in the development of positive Franco–American relations. His efforts proved vital in securing French aid for the American Revolution. From 1785 to 1788, he served as President of Pennsylvania. From at least as early as 1735 through the following decades, Franklin owned at least seven slaves and ran "for sale" ads for slaves in his newspaper, but by the late 1750s, he had begun arguing against slavery, became an active abolitionist, and promoted the education and integration of African Americans into U.S. society. As a scientist, Franklin's studies of electricity made him a major figure in the American Enlightenment and the history of physics. He also charted and named the Gulf Stream current. His numerous important inventions include the lightning rod, bifocals, glass harmonica and the Franklin stove. He founded many civic organizations, including the Library Company, the University of Pennsylvania, and Philadelphia's first fire department. Franklin earned the title of "The First American" for his early and indefatigable campaigning for colonial unity. He was the only person to sign the Declaration of Independence, the Treaty of Paris establishing peace with Britain, and the Constitution. Foundational in defining the American ethos, Franklin has been called "the most accomplished American of his age and the most influential in inventing the type of society America would become". Franklin's life and legacy of scientific and political achievement, and his status as one of America's most influential Founding Fathers, have seen him honored for more than two centuries after his death on the $100 bill and in the names of many towns and counties, educational institutions and corporations, as well as in numerous cultural references and a portrait in the Oval Office. 
His more than 30,000 letters and documents have been collected in The Papers of Benjamin Franklin. Anne Robert Jacques Turgot said of him: "Eripuit fulmen cœlo, mox sceptra tyrannis" ("He snatched lightning from the sky and the scepter from tyrants"). Ancestry Benjamin Franklin's father, Josiah Franklin, was a tallow chandler, soaper, and candlemaker. Josiah Franklin was born at Ecton, Northamptonshire, England, on December 23, 1657, to Thomas Franklin and Jane White. Benjamin's father and all four of his grandparents were born in England. Josiah Franklin had a total of seventeen children with his two wives. He married his first wife, Anne Child, in about 1677 in Ecton and emigrated with her to Boston in 1683; they had three children before emigration and four after. Following her death, Josiah married Abiah Folger on July 9, 1689, in the Old South Meeting House by Reverend Samuel Willard, and had ten children with her. Benjamin, their eighth child, was Josiah Franklin's fifteenth child overall, and his tenth and final son. Benjamin Franklin's mother, Abiah, was born in Nantucket, Massachusetts Bay Colony, on August 15, 1667, to Peter Folger, a miller and schoolteacher, and his wife, Mary Morrell Folger, a former indentured servant. Mary Folger came from a Puritan family that was among the first Pilgrims to flee to Massachusetts for religious freedom, sailing for Boston in 1635 after King Charles I of England had begun persecuting Puritans. Her father Peter was "the sort of rebel destined to transform colonial America." As clerk of the court, he was arrested on February 10, 1676, and jailed on February 19 for his inability to pay bail. He spent over a year and a half in jail. Early life and education Franklin was born on Milk Street in Boston, Province of Massachusetts Bay on January 17, 1706,[Note 1] and baptized at the Old South Meeting House in Boston. As a child growing up along the Charles River, Franklin recalled that he was "generally the leader among the boys." Franklin's father wanted him to attend school with the clergy but only had enough money to send him to school for two years. He attended Boston Latin School but did not graduate; he continued his education through voracious reading. Although "his parents talked of the church as a career" for Franklin, his schooling ended when he was ten. He worked for his father for a time, and at 12 he became an apprentice to his brother James, a printer, who taught him the printing trade. When Benjamin was 15, James founded The New-England Courant, which was the third newspaper founded in Boston. When denied the chance to write a letter to the paper for publication, Franklin adopted the pseudonym of "Silence Dogood", a middle-aged widow. Mrs. Dogood's letters were published and became a subject of conversation around town. Neither James nor the Courant's readers were aware of the ruse, and James was unhappy with Benjamin when he discovered the popular correspondent was his younger brother. Franklin was an advocate of free speech from an early age. When his brother was jailed for three weeks in 1722 for publishing material unflattering to the governor, young Franklin took over the newspaper and had Mrs. Dogood proclaim, quoting Cato's Letters, "Without freedom of thought there can be no such thing as wisdom and no such thing as public liberty without freedom of speech." Franklin left his apprenticeship without his brother's permission, and in so doing became a fugitive. 
At age 17, Franklin ran away to Philadelphia, seeking a new start in a new city. When he first arrived, he worked in several printing shops there, but he was not satisfied by the immediate prospects in any of these jobs. After a few months, while Franklin was working in one printing house, Pennsylvania governor Sir William Keith convinced him to go to London, ostensibly to acquire the equipment necessary for establishing another newspaper in Philadelphia. Discovering that Keith's promises of backing a newspaper were empty, he worked as a typesetter in a printer's shop in what is today the Lady Chapel of Church of St Bartholomew-the-Great in the Smithfield area of London, which had at that time been deconsecrated. He returned to Philadelphia in 1726 with the help of Thomas Denham, an English merchant who had emigrated but returned to England, and who employed Franklin as a clerk, shopkeeper, and bookkeeper in his business. In 1727, at age 21, Franklin formed the Junto, a group of "like minded aspiring artisans and tradesmen who hoped to improve themselves while they improved their community." The Junto was a discussion group for issues of the day; it subsequently gave rise to many organizations in Philadelphia. The Junto was modeled after English coffeehouses that Franklin knew well and which had become the center of the spread of Enlightenment ideas in Britain. Reading was a great pastime of the Junto, but books were rare and expensive. The members created a library, initially assembled from their own books, after Franklin wrote: A proposition was made by me that since our books were often referr'd to in our disquisitions upon the inquiries, it might be convenient for us to have them altogether where we met, that upon occasion they might be consulted; and by thus clubbing our books to a common library, we should, while we lik'd to keep them together, have each of us the advantage of using the books of all the other members, which would be nearly as beneficial as if each owned the whole. This did not suffice, however. Franklin conceived the idea of a subscription library, which would pool the funds of the members to buy books for all to read. This was the birth of the Library Company of Philadelphia, whose charter he composed in 1731. Upon Denham's death, Franklin returned to his former trade. In 1728, he set up a printing house in partnership with Hugh Meredith; the following year he became the publisher of The Pennsylvania Gazette, a newspaper in Philadelphia. The Gazette gave Franklin a forum for agitation about a variety of local reforms and initiatives through printed essays and observations. Over time, his commentary, and his adroit cultivation of a positive image as an industrious and intellectual young man, earned him a great deal of social respect. But even after he achieved fame as a scientist and statesman, he habitually signed his letters with the unpretentious 'B. Franklin, Printer'. In 1732, he published the first German-language newspaper in America – Die Philadelphische Zeitung – although it failed after only one year because four other newly founded German papers quickly dominated the newspaper market. Franklin also printed Moravian religious books in German. He often visited Bethlehem, Pennsylvania, staying at the Moravian Sun Inn. 
In a 1751 pamphlet on demographic growth and its implications for the Thirteen Colonies, he called the Pennsylvania Germans "Palatine Boors" who could never acquire the "Complexion" of Anglo-American settlers and referred to "Blacks and Tawneys" as weakening the social structure of the colonies. Although he apparently reconsidered shortly thereafter, and the phrases were omitted from all later printings of the pamphlet, his views may have played a role in his political defeat in 1764. According to Ralph Frasca, Franklin promoted the printing press as a device to instruct colonial Americans in moral virtue. Frasca argues he saw this as a service to God, because he understood moral virtue in terms of actions, thus, doing good provides a service to God. Despite his own moral lapses, Franklin saw himself as uniquely qualified to instruct Americans in morality. He tried to influence American moral life through the construction of a printing network based on a chain of partnerships from the Carolinas to New England. He thereby invented the first newspaper chain.[citation needed] It was more than a business venture, for like many publishers he believed that the press had a public-service duty. When he established himself in Philadelphia, shortly before 1730, the town boasted two "wretched little" news sheets, Andrew Bradford's The American Weekly Mercury and Samuel Keimer's Universal Instructor in all Arts and Sciences, and Pennsylvania Gazette. This instruction in all arts and sciences consisted of weekly extracts from Chambers's Universal Dictionary. Franklin quickly did away with all of this when he took over the Instructor and made it The Pennsylvania Gazette. The Gazette soon became his characteristic organ, which he freely used for satire, for the play of his wit, even for sheer excess of mischief or of fun. From the first, he had a way of adapting his models to his own uses. The series of essays called "The Busy-Body", which he wrote for Bradford's American Mercury in 1729, followed the general Addisonian form, already modified to suit homelier conditions. The thrifty Patience, in her busy little shop, complaining of the useless visitors who waste her valuable time, is related to the women who address Mr. Spectator. The Busy-Body himself is a true Censor Morum, as Isaac Bickerstaff had been in the Tatler. And a number of the fictitious characters, Ridentius, Eugenius, Cato, and Cretico, represent traditional 18th-century classicism. Franklin even used this classical framework for contemporary satire, as seen in the character of Cretico, the "sour Philosopher", who is clearly a caricature of his rival, Samuel Keimer.[page needed] Franklin had mixed success in his plan to establish an inter-colonial network of newspapers that would produce a profit for him and disseminate virtue. Over the years he sponsored two dozen printers in Pennsylvania, South Carolina, New York, Connecticut, and even the Caribbean. By 1753, eight of the fifteen English language newspapers in the colonies were published by him or his partners. He began in Charleston, South Carolina, in 1731. After his second editor died, the widow, Elizabeth Timothy, took over and made it a success. She was one of the colonial era's first woman printers. For three decades Franklin maintained a close business relationship with her and her son Peter Timothy, who took over the South Carolina Gazette in 1746. 
The Gazette was impartial in political debates, while creating the opportunity for public debate, which encouraged others to challenge authority. Timothy avoided blandness and crude bias and, after 1765, increasingly took a patriotic stand in the growing crisis with Great Britain. Franklin's Connecticut Gazette (1755–68), however, proved unsuccessful. As the Revolution approached, political strife slowly tore his network apart. In 1730 or 1731, Franklin was initiated into the local Masonic lodge. He became a grand master in 1734, indicating his rapid rise to prominence in Pennsylvania. The same year, he edited and published the first Masonic book in the Americas, a reprint of James Anderson's Constitutions of the Free-Masons. He was the secretary of St. John's Lodge in Philadelphia from 1735 to 1738. In January 1738, "Franklin appeared as a witness" in a manslaughter trial against two men who killed "a simple-minded apprentice" named Daniel Rees in a fake Masonic initiation gone wrong. One of the men "threw, or accidentally spilled, the burning spirits, and Daniel Rees died of his burns two days later." While Franklin did not directly participate in the hazing that led to Rees' death, he knew of the hazing before it turned fatal, and did nothing to stop it. He was criticized for his inaction in The American Weekly Mercury, by his publishing rival Andrew Bradford. Ultimately, "Franklin replied in his own defense in the Gazette." Franklin remained a Freemason for the rest of his life. At age 17 in 1723, Franklin proposed to 15-year-old Deborah Read while a boarder in the Read home. At that time, Deborah's mother was wary of allowing her young daughter to marry Franklin, who was on his way to London at Governor Keith's request, and also because of his financial instability. Her own husband had recently died, and she declined Franklin's request to marry her daughter. Franklin travelled to London, and after he failed to communicate as expected with Deborah and her family, they interpreted his long silence as a breaking of his promises. At the urging of her mother, Deborah married a potter named John Rogers on August 5, 1725. John soon fled to Barbados with her dowry in order to avoid debts and prosecution. Since Rogers' fate was unknown, bigamy laws prevented Deborah from remarrying. Franklin returned in 1726 and resumed his courtship of Deborah. They established a common-law marriage on September 1, 1730. They took in his recently acknowledged illegitimate young son and raised him in their household. They had two children together. Their son, Francis Folger Franklin, was born in October 1732 and died of smallpox in 1736. Their daughter, Sarah "Sally" Franklin, was born in 1743 and eventually married Richard Bache.[Note 2] Deborah's fear of the sea meant that she never accompanied Franklin on any of his extended trips to Europe; another possible reason why they spent much time apart is that he may have blamed her for possibly preventing their son Francis from being inoculated against the disease that subsequently killed him. Deborah wrote to him in November 1769, saying she was ill due to "dissatisfied distress" from his prolonged absence, but he did not return until his business was done. Deborah Read Franklin died of a stroke on December 14, 1774, while Franklin was on an extended mission to Great Britain; he returned in 1775. In 1730, 24-year-old Franklin publicly acknowledged his illegitimate son William and raised him in his household. 
William was born on February 22, 1730, but his mother's identity is unknown. He was educated in Philadelphia and, beginning at about age 30, studied law in London in the early 1760s. William himself fathered an illegitimate son, William Temple Franklin, born on the same day and month: February 22, 1760. The boy's mother was never identified, and he was placed in foster care. In 1762, the elder William Franklin married Elizabeth Downes, daughter of a planter from Barbados, in London. In 1763, he was appointed as the last royal governor of New Jersey. A Loyalist to the king, William Franklin saw his relations with his father Benjamin eventually break down over their differences about the American Revolutionary War, as Benjamin Franklin could never accept William's position. Deposed in 1776 by the revolutionary government of New Jersey, William was placed under house arrest at his home in Perth Amboy for six months. After the Declaration of Independence, he was formally taken into custody by order of the Provincial Congress of New Jersey, an entity which he refused to recognize, regarding it as an "illegal assembly." He was incarcerated in Connecticut for two years, in Wallingford and Middletown, and, after being caught surreptitiously enlisting Americans in support of the Loyalist cause, was held in solitary confinement at Litchfield for eight months. When finally released in a prisoner exchange in 1778, he moved to New York City, which was occupied by the British at the time. While in New York City, he became leader of the Board of Associated Loyalists, a quasi-military organization chartered by King George III and headquartered in New York City. They initiated guerrilla forays into New Jersey, southern Connecticut, and New York counties north of the city. When British troops evacuated from New York, William Franklin left with them and sailed to England. He settled in London, never to return to North America. In the preliminary peace talks in 1782 with Britain, "... Benjamin Franklin insisted that loyalists who had borne arms against the United States would be excluded from this plea (that they be given a general pardon). He was undoubtedly thinking of William Franklin."[unreliable source?] In 1732, Franklin began to publish the noted Poor Richard's Almanack (with content both original and borrowed) under the pseudonym Richard Saunders, on which much of his popular reputation is based. He frequently wrote under pseudonyms. The first issue published was for the upcoming year, 1733. He had developed a distinct signature style: plain and pragmatic, with a sly, soft, self-deprecating tone and declarative sentences. Although it was no secret that he was the author, his Richard Saunders character repeatedly denied it. "Poor Richard's Proverbs", adages from this almanac, such as "A penny saved is twopence dear" (often misquoted as "A penny saved is a penny earned") and "Fish and visitors stink in three days", remain common quotations in the modern world. Wisdom in folk society meant the ability to provide an apt adage for any occasion, and his readers became well prepared. He sold about ten thousand copies per year; the almanac became an institution. In 1741, Franklin began publishing The General Magazine and Historical Chronicle for all the British Plantations in America. He used the heraldic badge of the Prince of Wales as the cover illustration.
Franklin wrote a letter, "Advice to a Friend on Choosing a Mistress", dated June 25, 1745, in which he gives advice to a young man about channeling sexual urges. Due to its licentious nature, it was not published in collections of his papers during the 19th century. Federal court rulings from the mid-to-late 20th century cited the document as a reason for overturning obscenity laws and against censorship. Public life In 1736, Franklin created the Union Fire Company, one of the first volunteer firefighting companies in America. In the same year, he printed a new currency for New Jersey based on innovative anti-counterfeiting techniques he had devised. His political career commenced in this same period, notably as Chief Clerk of the Pennsylvania Provincial Assembly, a post in which he served until 1751. Throughout his career, he was an advocate for paper money, publishing A Modest Enquiry into the Nature and Necessity of a Paper Currency in 1729, and his press printed paper money. He was influential in the more restrained and thus successful monetary experiments in the Middle Colonies, which stopped deflation without causing excessive inflation. In 1766, he made a case for paper money to the British House of Commons. As he matured, Franklin began to concern himself more with public affairs. In 1743, he first devised a scheme for the Academy, Charity School, and College of Philadelphia; however, the person he had in mind to run the academy, Rev. Richard Peters, refused, and Franklin put his ideas away until 1749, when he printed his own pamphlet, Proposals Relating to the Education of Youth in Pensilvania. He was appointed president of the Academy on November 13, 1749; the academy and the charity school opened in 1751. In 1743, he founded the American Philosophical Society to help scientific men discuss their discoveries and theories. He began the electrical research that, along with other scientific inquiries, would occupy him for the rest of his life, in between bouts of politics and moneymaking. During King George's War, Franklin raised a militia called the Association for General Defense because the legislators of the city had decided to take no action to defend Philadelphia "either by erecting fortifications or building Ships of War." He raised money to create earthwork defenses and buy artillery. The largest of these was the "Association Battery" or "Grand Battery" of 50 guns. In 1747, Franklin (already a very wealthy man) retired from printing and went into other businesses. He formed a partnership with his foreman, David Hall, which provided Franklin with half of the shop's profits for 18 years. This lucrative business arrangement provided leisure time for study, and in a few years he had made many new discoveries. Franklin became involved in Philadelphia politics and rapidly progressed. In October 1748, he was selected as a councilman; in June 1749, he became a justice of the peace for Philadelphia; and in 1751, he was elected to the Pennsylvania Assembly. On August 10, 1753, he was appointed deputy postmaster-general of British North America. His service in domestic politics included reforming the postal system, with mail sent out every week. In 1751, Franklin and Thomas Bond obtained a charter from the Pennsylvania legislature to establish a hospital. Pennsylvania Hospital was the first hospital in the colonies. In 1752, Franklin organized the Philadelphia Contributionship, the colonies' first homeowner's insurance company.
Between 1750 and 1753, the "educational triumvirate" of Franklin, Samuel Johnson of Stratford, Connecticut, and schoolteacher William Smith built on Franklin's initial scheme and created what Bishop James Madison, president of the College of William & Mary, called a "new-model" plan or style of American college. Franklin solicited, printed in 1752, and promoted an American textbook of moral philosophy by Samuel Johnson, titled Elementa Philosophica, to be taught in the new colleges. In June 1753, Johnson, Franklin, and Smith met in Stratford. They decided the new-model college would focus on the professions, teach classes in English instead of Latin, employ subject-matter experts as professors instead of one tutor leading a class for four years, and impose no religious test for admission. Johnson went on to found King's College (now Columbia University) in New York City in 1754, while Franklin hired Smith as provost of the College of Philadelphia, which opened in 1755. At its first commencement, on May 17, 1757, seven men graduated: six with a Bachelor of Arts and one with a Master of Arts. It was later merged with the University of the State of Pennsylvania to become the University of Pennsylvania. The college was to become influential in guiding the founding documents of the United States: in the Continental Congress, for example, over one-third of the men who contributed to the Declaration of Independence between September 4, 1774, and July 4, 1776, were affiliated with the college. In 1754, he headed the Pennsylvania delegation to the Albany Congress. This meeting of several colonies had been requested by the Board of Trade in England to improve relations with the Indians and defense against the French. Franklin proposed a broad Plan of Union for the colonies. While the plan was not adopted, elements of it found their way into the Articles of Confederation and the Constitution. In 1753, Harvard and Yale awarded him honorary master of arts degrees. In 1756, he was awarded an honorary Master of Arts degree from the College of William & Mary. Later in 1756, Franklin organized the Pennsylvania Militia. He used Tun Tavern as a gathering place to recruit a regiment of soldiers to go into battle against the Native American uprisings that beset the American colonies. Well known as a printer and publisher, Franklin was appointed postmaster of Philadelphia in 1737, holding the office until 1753, when he and publisher William Hunter were named deputy postmasters-general of British North America, the first to hold the office. (Joint appointments were standard at the time, for political reasons.) He was responsible for the British colonies from Pennsylvania north and east, as far as the island of Newfoundland. A post office for local and outgoing mail had been established in Halifax, Nova Scotia, by local stationer Benjamin Leigh, on April 23, 1754, but service was irregular. Franklin opened the first post office to offer regular, monthly mail in Halifax on December 9, 1755. Meanwhile, Hunter became postal administrator in Williamsburg, Virginia, and oversaw areas south of Annapolis, Maryland. Franklin reorganized the service's accounting system and improved speed of delivery between Philadelphia, New York, and Boston. By 1761, efficiencies led to the first profits for the colonial post office.
When the lands of New France were ceded to the British under the Treaty of Paris in 1763, the British province of Quebec was created out of them, and Franklin saw mail service expanded between Montreal, Trois-Rivières, Quebec City, and New York. For the greater part of his appointment, he lived in England (from 1757 to 1762, and again from 1764 to 1774), about three-quarters of his term. Eventually, his sympathies for the rebel cause in the American Revolution led to his dismissal on January 31, 1774. Franklin had been a postmaster for decades and was a natural choice for a continental appointment. Having just returned from England, he was appointed chairman of a Committee of Investigation to establish a postal system. The report of the committee, providing for the appointment of a postmaster general for the 13 American colonies, was considered by the Continental Congress on July 25 and 26, and on July 26, 1775, the Second Continental Congress established the United States Post Office and named Franklin the first United States postmaster general. His apprentice, William Goddard, felt that his own ideas were mostly responsible for shaping the postal system and that the appointment should have gone to him, but he graciously conceded it to Franklin, 36 years his senior. Franklin, however, appointed Goddard as Surveyor of the Posts, issued him a signed pass, and directed him to investigate and inspect the various post offices and mail routes as he saw fit. The newly established postal system became the United States Post Office, a system that continues to operate today. In 1757, he was sent to England by the Pennsylvania Assembly as a colonial agent to protest against the political influence of the Penn family, the proprietors of the colony. He remained there for five years, striving to end the proprietors' prerogative to overturn legislation from the elected Assembly and their exemption from paying taxes on their land. His lack of influential allies in Whitehall led to the failure of this mission.[citation needed] At this time, many members of the Pennsylvania Assembly were feuding with William Penn's heirs, who controlled the colony as proprietors. After his return to the colony, Franklin led the "anti-proprietary party" in the struggle against the Penn family and was elected Speaker of the Pennsylvania House in May 1764. His call for a change from proprietary to royal government was a rare political miscalculation, however: Pennsylvanians worried that such a move would endanger their political and religious freedoms. Because of these fears and because of political attacks on his character, Franklin lost his seat in the October 1764 Assembly elections. The anti-proprietary party dispatched him to England again to continue the struggle against the Penn family proprietorship. During this trip, events drastically changed the nature of his mission. In London, Franklin opposed the 1765 Stamp Act. Unable to prevent its passage, he made another political miscalculation and recommended a friend to the post of stamp distributor for Pennsylvania. Pennsylvanians were outraged, believing that he had supported the measure all along, and threatened to destroy his home in Philadelphia. Franklin soon learned of the extent of colonial resistance to the Stamp Act, and he testified during the House of Commons proceedings that led to its repeal.
With this, Franklin suddenly emerged as the leading spokesman for American interests in England. He wrote popular essays on behalf of the colonies. Georgia, New Jersey, and Massachusetts also appointed him as their agent to the Crown. During his lengthy missions to London between 1757 and 1775, Franklin lodged in a house on Craven Street, just off the Strand in central London. During his stays there, he developed a close friendship with his landlady, Margaret Stevenson, and her circle of friends and relations, in particular her daughter Mary, who was more often known as Polly. The house is now a museum known as the Benjamin Franklin House. Whilst in London, Franklin became involved in radical politics. He belonged to a gentlemen's club (which he called "the honest Whigs"), which held stated meetings and included members such as Richard Price, the minister of Newington Green Unitarian Church who ignited the Revolution controversy, and Andrew Kippis. In 1756, Franklin had become a member of the Society for the Encouragement of Arts, Manufactures & Commerce (now the Royal Society of Arts), which had been founded in 1754. After his return to the United States in 1775, he became a corresponding member of the Society, continuing a close connection. The Royal Society of Arts instituted a Benjamin Franklin Medal in 1956 to commemorate the 250th anniversary of his birth and the 200th anniversary of his membership of the RSA. The study of natural philosophy (referred to today simply as science) drew him into overlapping circles of acquaintance. Franklin was, for example, a corresponding member of the Lunar Society of Birmingham. In 1759, the University of St Andrews awarded him an honorary doctorate in recognition of his accomplishments. In October 1759, he was granted Freedom of the Borough of St Andrews. He was also awarded an honorary doctorate by Oxford University in 1762. Because of these honors, he was often addressed as "Dr. Franklin." While living in London in 1768, he developed a phonetic alphabet in A Scheme for a new Alphabet and a Reformed Mode of Spelling. This reformed alphabet discarded six letters he regarded as redundant (c, j, q, w, x, and y) and substituted six new letters for sounds he felt lacked letters of their own. This alphabet never caught on, and he eventually lost interest. From the mid-1750s to the mid-1770s, Franklin spent much of his time in London, using the city as a base from which to travel. In 1771, he made short journeys through different parts of England, staying with Joseph Priestley at Leeds, Thomas Percival at Manchester and Erasmus Darwin at Lichfield. In Scotland, he spent five days with Lord Kames near Stirling and stayed for three weeks with David Hume in Edinburgh. In 1759, he visited Edinburgh with his son and later reported that he considered his six weeks in Scotland "six weeks of the densest happiness I have met with in any part of my life." In Ireland, he stayed with Lord Hillsborough. Franklin noted of him that "all the plausible behaviour I have described is meant only, by patting and stroking the horse, to make him more patient, while the reins are drawn tighter, and the spurs set deeper into his sides." In Dublin, Franklin was invited to sit with the members of the Irish Parliament rather than in the gallery. He was the first American to receive this honor. While touring Ireland, he was deeply moved by the level of poverty he witnessed.
The economy of the Kingdom of Ireland was affected by the same trade regulations and laws that governed the Thirteen Colonies. He feared that the American colonies could eventually come to the same level of poverty if the regulations and laws continued to apply to them. Franklin spent two months in German lands in 1766, but his connections to the country stretched across a lifetime. He declared a debt of gratitude to German scientist Otto von Guericke for his early studies of electricity. Franklin also co-authored the first treaty of friendship between Prussia and America in 1785. In September 1767, he visited Paris with his usual traveling partner, Sir John Pringle, 1st Baronet. News of his electrical discoveries was widespread in France. His reputation meant that he was introduced to many influential scientists and politicians, and also to King Louis XV. One line of argument in Parliament was that Americans should pay a share of the costs of the French and Indian War and that taxes should therefore be levied on them. Franklin became the American spokesman in highly publicized testimony in Parliament in 1766. He stated that Americans already contributed heavily to the defense of the Empire. He said local governments had raised, outfitted and paid 25,000 soldiers to fight France, as many as Great Britain itself sent, and had spent many millions from American treasuries doing so in the French and Indian War alone. In 1772, Franklin obtained private letters of Thomas Hutchinson and Andrew Oliver, governor and lieutenant governor of the Province of Massachusetts Bay, proving that they had encouraged the Crown to crack down on Bostonians. Franklin sent them to North America, where they escalated tensions. The letters were finally leaked to the public in the Boston Gazette in mid-June 1773, causing a political firestorm in Massachusetts and raising significant questions in England. The British began to regard him as the fomenter of serious trouble. Hopes for a peaceful solution ended as he was systematically ridiculed and humiliated by Solicitor-General Alexander Wedderburn before the Privy Council on January 29, 1774. He returned to Philadelphia in March 1775 and abandoned his accommodationist stance. In 1773, Franklin published two of his most celebrated pro-American satirical essays: "Rules by Which a Great Empire May Be Reduced to a Small One" and "An Edict by the King of Prussia." Franklin is known to have occasionally attended the Hellfire Club's meetings as a non-member during 1758, while he was in England; some authors and historians argue that he was in fact a British spy. Because the club's records were burned in 1774, its membership can only be inferred from letters the members sent to one another. One early proponent of the claim that Franklin was a member of the Hellfire Club and a double agent is the historian Donald McCormick, who has a history of making controversial claims. In 1763, soon after Franklin returned to Pennsylvania from England for the first time, the western frontier was engulfed in a bitter war known as Pontiac's Rebellion. The Paxton Boys, a group of settlers convinced that the Pennsylvania government was not doing enough to protect them from American Indian raids, murdered a group of peaceful Susquehannock Indians and marched on Philadelphia. Franklin helped to organize a local militia to defend the capital against the mob. He met with the Paxton leaders and persuaded them to disperse.
Franklin wrote a scathing attack on the racial prejudice of the Paxton Boys. "If an Indian injures me", he asked, "does it follow that I may revenge that injury on all Indians?" He provided an early response to British surveillance through his own network of counter-surveillance and manipulation. "He waged a public relations campaign, secured secret aid, played a role in privateering expeditions, and churned out effective and inflammatory propaganda." By the time Franklin arrived in Philadelphia on May 5, 1775, after his second mission to Great Britain, the American Revolution had begun at the Battles of Lexington and Concord the previous month, on April 19, 1775. The New England militia had forced the main British army to remain inside Boston. The Pennsylvania Assembly unanimously chose Franklin as their delegate to the Second Continental Congress. In June 1776, he was appointed a member of the Committee of Five that drafted the Declaration of Independence. Although he was temporarily disabled by gout and unable to attend most meetings of the committee,[citation needed] he made several "small but important" changes to the draft sent to him by Thomas Jefferson. The "all hang together" saying ascribed to Franklin at the signing is probably apocryphal. He reportedly replied to John Hancock, when Hancock stated that they must all hang together, "Yes, we must, indeed, all hang together, or most assuredly we shall all hang separately." Carl Van Doren, in Benjamin Franklin's Autobiographical Writings, writes that the person who said this was most likely Richard Penn, former governor of Pennsylvania, replying to a member of Congress who had said that they must all hang together: "If you do not, gentlemen," said Mr. Penn, "I can tell you that you will be very apt to hang separately." On October 26, 1776, Franklin was dispatched to France as commissioner for the United States. He took with him as secretary his 16-year-old grandson, William Temple Franklin. They lived in a home in the Parisian suburb of Passy, donated by Jacques-Donatien Le Ray de Chaumont, who supported the United States. Franklin remained in France until 1785. He conducted the affairs of his country toward the French nation with great success, which included securing a critical military alliance in 1778, signing the 1783 Treaty of Paris, and spearheading various clandestine operations against the British, including the privateer activities of John Paul Jones. Among his associates in France was Honoré Gabriel Riqueti, comte de Mirabeau, a French Revolutionary writer, orator and statesman who in 1791 was elected president of the National Assembly. In July 1784, Franklin met with Mirabeau and contributed anonymous materials that the Frenchman used in his first signed work, Considerations sur l'ordre de Cincinnatus. The publication was critical of the Society of the Cincinnati, established in the United States. Franklin and Mirabeau thought of it as a "noble order" inconsistent with the egalitarian ideals of the new republic. During his stay in France, he was active as a Freemason, serving as venerable master of the lodge Les Neuf Sœurs from 1779 until 1781. In 1784, when Franz Mesmer began to publicize his theory of "animal magnetism", which many considered offensive, Louis XVI appointed a commission to investigate it. Its members included the chemist Antoine Lavoisier, the physician Joseph-Ignace Guillotin, the astronomer Jean Sylvain Bailly, and Franklin.
In its investigation, the committee concluded through blind trials that mesmerism only seemed to work when the subjects expected it; the finding discredited mesmerism and became the first major demonstration of the placebo effect, which was described at that time as "imagination." In 1781, he was elected a fellow of the American Academy of Arts and Sciences. Franklin's advocacy for religious tolerance in France contributed to arguments made by French philosophers and politicians that resulted in Louis XVI's signing of the Edict of Versailles in November 1787. This edict effectively nullified the Edict of Fontainebleau, which had denied non-Catholics civil status and the right to openly practice their faith. Franklin also served as American minister to Sweden, although he never visited that country. He negotiated a treaty that was signed in April 1783. On August 27, 1783, in Paris, he witnessed the world's first hydrogen balloon flight. Le Globe, created by professor Jacques Charles and Les Frères Robert, was watched by a vast crowd as it rose from the Champ de Mars (now the site of the Eiffel Tower). Franklin became so enthusiastic that he subscribed financially to the next project to build a manned hydrogen balloon. On December 1, 1783, Franklin was seated in the special enclosure for honored guests when that balloon took off from the Jardin des Tuileries, piloted by Charles and Nicolas-Louis Robert. Walter Isaacson describes a chess game between Franklin and the Duchess of Bourbon, "who made a move that inadvertently exposed her king. Ignoring the rules of the game, he promptly captured it. 'Ah,' said the duchess, 'we do not take Kings so.' Replied Franklin in a famous quip: 'We do in America.'" When he returned home in 1785, Franklin occupied a position second only to that of George Washington as the champion of American independence. Le Ray honored him with a commissioned portrait painted by Joseph Duplessis, which now hangs in the National Portrait Gallery of the Smithsonian Institution in Washington, D.C. After his return, Franklin became an abolitionist and freed his two slaves. He eventually became president of the Pennsylvania Abolition Society. Special balloting conducted October 18, 1785, unanimously elected him the sixth president of the Supreme Executive Council of Pennsylvania, replacing John Dickinson. The office was practically that of governor. He held it for slightly over three years, longer than any other holder of the office, and served the constitutional limit of three full terms. Shortly after his initial election, he was re-elected to a full term on October 29, 1785, and again in the fall of 1786 and on October 31, 1787. In that capacity, he served as host to the Constitutional Convention of 1787 in Philadelphia. He also served as a delegate to the Convention. It was primarily an honorary position and he seldom engaged in debate. According to James McHenry, Elizabeth Willing Powel asked Franklin what kind of government they had wrought. He replied: "A republic, madam, if you can keep it." Death Franklin suffered from obesity throughout his middle and later years, which resulted in multiple health problems, including gout, which worsened as he aged. In poor health during the signing of the U.S. Constitution in 1787, he was rarely seen in public from then until his death.[citation needed] Franklin died from a pleuritic attack at his home in Philadelphia on April 17, 1790, at age 84.
After his daughter suggested that he change position in bed and lie on his side so he could breathe more easily, his last reported words, conveyed to her, were "a dying man can do nothing easy". Franklin's death is described in the book The Life of Benjamin Franklin, quoting from the account of his physician, Dr. John Jones: ... when the pain and difficulty of breathing entirely left him, and his family were flattering themselves with the hopes of his recovery, when an imposthume, which had formed itself in his lungs, suddenly burst, and discharged a quantity of matter, which he continued to throw up while he had power; but, as that failed, the organs of respiration became gradually oppressed; a calm, lethargic state succeeded; and on the 17th instant (April 1790), about eleven o'clock at night, he quietly expired, closing a long and useful life of eighty-four years and three months. Approximately 20,000 people attended Franklin's funeral, after which he was interred in Christ Church Burial Ground in Philadelphia. Upon learning of his death, the National Constituent Assembly in Revolutionary France entered into a state of mourning for a period of three days, and memorial services were conducted in honor of Franklin throughout the country. In 1728, at age 22, Franklin wrote what he hoped would be his own epitaph:

The Body of B. Franklin, Printer;
Like the Cover of an old Book,
Its Contents torn out,
And stript of its Lettering and Gilding,
Lies here, Food for Worms.
But the Work shall not be wholly lost:
For it will, as he believ'd, appear once more,
In a new & more perfect Edition,
Corrected and Amended
By the Author.

Franklin's actual grave, however, as he specified in his final will, simply reads "Benjamin and Deborah Franklin." Inventions and scientific inquiries Franklin was a prodigious inventor. Among his many creations were the lightning rod, Franklin stove, bifocal glasses and the flexible urinary catheter. He never patented his inventions; in his autobiography[Note 5] he wrote, "... as we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours; and this we should do freely and generously." Franklin and his contemporary Leonhard Euler were the only major scientists who supported Christiaan Huygens's wave theory of light, which was largely ignored by the rest of the scientific community. In the 18th century, Isaac Newton's corpuscular theory was held to be true; it took Thomas Young's well-known slit experiment in 1803 to persuade most scientists to believe Huygens's theory. Franklin started exploring the phenomenon of electricity in the 1740s, after he met the itinerant lecturer Archibald Spencer, who used static electricity in his demonstrations. He proposed that "vitreous" and "resinous" electricity were not different types of "electrical fluid" (as electricity was called then), but the same "fluid" under different pressures. (The same proposal was made independently around the same time by William Watson.) He was the first to label them as positive and negative respectively, replacing the then-current distinction between "vitreous" and "resinous" electricity, and he was the first to discover the principle of conservation of charge. In 1748, he constructed a multiple-plate capacitor, which he called an "electrical battery" (not a true battery like Volta's pile), by sandwiching eleven panes of glass between lead plates, suspended with silk cords and connected by wires.
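Franklin's "battery" behaves like modern capacitors wired together, and if the panes are taken as connected in parallel their capacitances simply add. The sketch below is a back-of-the-envelope illustration only, not a figure from the source: the pane dimensions, glass thickness, relative permittivity, and the purely parallel connection are all modern assumptions.

    # Rough estimate of a Franklin-style "electrical battery":
    # glass panes faced with lead plates, assumed wired in parallel.
    EPSILON_0 = 8.854e-12   # vacuum permittivity, farads per meter
    EPS_R_GLASS = 5.0       # assumed relative permittivity of window glass

    def pane_capacitance(area_m2, thickness_m):
        # Parallel-plate formula: C = eps0 * eps_r * A / d
        return EPSILON_0 * EPS_R_GLASS * area_m2 / thickness_m

    single = pane_capacitance(area_m2=0.25 * 0.30,  # assumed 25 cm x 30 cm pane
                              thickness_m=0.003)    # assumed 3 mm glass
    battery = 11 * single                           # 11 panes in parallel add up

    print(f"one pane: {single * 1e9:.2f} nF")            # roughly 1.1 nF
    print(f"battery of 11 panes: {battery * 1e9:.2f} nF")  # roughly 12 nF

Under these assumptions each pane stores about a nanofarad and the whole battery about a dozen nanofarads; charged to the tens of kilovolts an electrostatic machine of the period could supply, that is enough stored energy to account for the severe shocks Franklin describes receiving.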
In pursuit of more pragmatic uses for electricity, remarking in spring 1749 that he felt "chagrin'd a little" that his experiments had heretofore resulted in "Nothing in this Way of Use to Mankind", Franklin planned a practical demonstration. He proposed a dinner party where a turkey was to be killed via electric shock and roasted on an electrical spit. After having prepared several turkeys this way, he noted that "the birds kill'd in this manner eat uncommonly tender." Franklin recounted that in the process of one of these experiments, he was shocked by a pair of Leyden jars, resulting in numbness in his arms that persisted for one evening, noting "I am Ashamed to have been Guilty of so Notorious a Blunder." Franklin briefly investigated electrotherapy, including the use of the electric bath. This work helped make the field widely known. In recognition of his work with electricity, he received the Royal Society's Copley Medal in 1753, and in 1756, he became one of the few 18th-century Americans elected a fellow of the Society. The CGS unit of electric charge has been named after him: one franklin (Fr) is equal to one statcoulomb. Franklin advised Harvard University in its acquisition of new electrical laboratory apparatus after the complete loss of its original collection in a fire that destroyed the original Harvard Hall in 1764. The collection he assembled later became part of the Harvard Collection of Historical Scientific Instruments, now on public display in its Science Center. Franklin published a proposal for an experiment to prove that lightning is electricity by flying a kite in a storm. On May 10, 1752, Thomas-François Dalibard of France conducted Franklin's experiment using a 40-foot-tall (12 m) iron rod instead of a kite, and he extracted electrical sparks from a cloud. On June 15, 1752, Franklin may have conducted his well-known kite experiment in Philadelphia, successfully extracting sparks from a cloud. He described the experiment in his newspaper, The Pennsylvania Gazette, on October 19, 1752, without mentioning that he himself had performed it. This account was read to the Royal Society on December 21 and printed as such in the Philosophical Transactions. Joseph Priestley published an account with additional details in his 1767 History and Present Status of Electricity. Franklin was careful to stand on an insulator, keeping dry under a roof to avoid the danger of electric shock. Others, such as Georg Wilhelm Richmann in Russia, were indeed electrocuted in performing lightning experiments during the months immediately following his experiment. In his writings, Franklin indicates that he was aware of the dangers and offered alternative ways to demonstrate that lightning was electrical, as shown by his use of the concept of electrical ground. He did not perform this experiment in the way that is often pictured in popular literature, flying the kite and waiting to be struck by lightning, as it would have been dangerous. Instead he used the kite to collect some electric charge from a storm cloud, showing that lightning was electrical.
On October 19, 1752, in a letter to England with directions for repeating the experiment, he wrote: When rain has wet the kite twine so that it can conduct the electric fire freely, you will find it streams out plentifully from the key at the approach of your knuckle, and with this key a phial, or Leyden jar, may be charged: and from electric fire thus obtained spirits may be kindled, and all other electric experiments [may be] performed which are usually done by the help of a rubber glass globe or tube; and therefore the sameness of the electrical matter with that of lightening [sic] completely demonstrated. Franklin's electrical experiments led to his invention of the lightning rod. He said that conductors with a sharp rather than a smooth point could discharge silently and at a far greater distance. He surmised that this could help protect buildings from lightning by attaching "upright Rods of Iron, made sharp as a Needle and gilt to prevent Rusting, and from the Foot of those Rods a Wire down the outside of the Building into the Ground; ... Would not these pointed Rods probably draw the Electrical Fire silently out of a Cloud before it came nigh enough to strike, and thereby secure us from that most sudden and terrible Mischief!" Following a series of experiments on Franklin's own house, lightning rods were installed on the Academy of Philadelphia (later the University of Pennsylvania) and the Pennsylvania State House (later Independence Hall) in 1752. Though Franklin is famously associated with kites because of his lightning experiments, he has also been noted for using kites to pull humans and ships across waterways. George Pocock, in his book A Treatise on The Aeropleustic Art, or Navigation in the Air, by means of Kites, or Buoyant Sails, noted that he had been inspired by Franklin's use of kite power to draw himself across a waterway. Franklin noted a principle of refrigeration by observing that on a very hot day, he stayed cooler in a wet shirt in a breeze than he did in a dry one. To understand this phenomenon more clearly, he conducted experiments. On a warm day in Cambridge, England, in 1758, he and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually reaching 7 °F (−14 °C). Another thermometer showed that the room temperature was constant at 65 °F (18 °C). In his letter Cooling by Evaporation, Franklin noted that "One may see the possibility of freezing a man to death on a warm summer's day." In 1761, Franklin wrote a letter to Mary Stevenson describing his experiments on the relationship between color and heat absorption. He found that darker-colored clothes got hotter when exposed to sunlight than lighter-colored clothes, an early demonstration of the absorption of thermal radiation by dark bodies. One experiment he performed consisted of placing square pieces of cloth of various colors out in the snow on a sunny day. After waiting some time, he found that the black pieces had sunk furthest into the snow, indicating that they had become the hottest and melted the most snow. According to Michael Faraday, Franklin's experiments on the non-conduction of ice are worth mentioning, although the law of the general effect of liquefaction on electrolytes is not attributed to Franklin.
However, as Franklin's great-grandson Alexander Dallas Bache of the University of Pennsylvania reported in 1836, the law that heat turns otherwise non-conducting bodies, for example glass, into conductors could be attributed to Franklin. Franklin wrote, "... A certain quantity of heat will make some bodies good conductors, that will not otherwise conduct ..." and again, "... And water, though naturally a good conductor, will not conduct well when frozen into ice." As deputy postmaster, Franklin became interested in North Atlantic Ocean circulation patterns. While in England in 1768, he heard a complaint from the Colonial Board of Customs: British packet ships carrying mail had taken several weeks longer to reach New York than an average merchant ship took to reach Newport, Rhode Island. The merchantmen had a longer and more complex voyage because they left from London, while the packets left from Falmouth in Cornwall. Franklin put the question to his cousin Timothy Folger, a Nantucket whaler captain, who told him that merchant ships routinely avoided a strong eastbound mid-ocean current. The mail packet captains sailed dead into it, thus fighting an adverse current of 3 miles per hour (5 km/h). Franklin worked with Folger and other experienced ship captains, learning enough to chart the current and name it the Gulf Stream, by which it is still known today. Franklin published his Gulf Stream chart in 1770 in England, where it was ignored. Subsequent versions were printed in France in 1778 and the U.S. in 1786. The British original edition of the chart had been so thoroughly ignored that everyone assumed it was lost forever until Phil Richardson, a Woods Hole oceanographer and Gulf Stream expert, discovered it in the Bibliothèque Nationale in Paris in 1980. This find received front-page coverage in The New York Times. It took many years for British sea captains to adopt Franklin's advice on navigating the current; once they did, they were able to trim two weeks from their sailing time. In 1853, the oceanographer and cartographer Matthew Fontaine Maury noted that while Franklin charted and codified the Gulf Stream, he did not discover it: Though it was Dr. Franklin and Captain Tim Folger, who first turned the Gulf Stream to nautical account, the discovery that there was a Gulf Stream cannot be said to belong to either of them, for its existence was known to Peter Martyr d'Anghiera, and to Sir Humphrey Gilbert, in the 16th century. An aging Franklin accumulated all his oceanographic findings in Maritime Observations, published in the Philosophical Society's Transactions in 1786. It contained ideas for sea anchors, catamaran hulls, watertight compartments, shipboard lightning rods and a soup bowl designed to stay stable in stormy weather. While traveling on a ship, Franklin had observed that the wake of a ship was diminished when the cooks scuttled their greasy water. He studied the effect on a large pond on Clapham Common, London: "I fetched out a cruet of oil and dropt a little of it on the water ... though not more than a teaspoon full, produced an instant calm over a space of several yards square." He later used the trick to "calm the waters" by carrying "a little oil in the hollow joint of [his] cane." On October 21, 1743, according to popular myth, a storm moving from the southwest denied Franklin the opportunity of witnessing a lunar eclipse. He was said to have noted that the prevailing winds were actually from the northeast, contrary to what he had expected.
In correspondence with his brother, he learned that the same storm had not reached Boston until after the eclipse, despite the fact that Boston is to the northeast of Philadelphia. He deduced that storms do not always travel in the direction of the prevailing wind, a concept that greatly influenced meteorology. After the Icelandic volcanic eruption of Laki in 1783, and the subsequent harsh European winter of 1784, Franklin made observations suggesting a causal connection between these two seemingly separate events. He wrote about them in a lecture series. Franklin had a major influence on the emerging science of demography, or population studies. In the 1730s and 1740s, he began taking notes on population growth, finding that the American population had the fastest growth rate on Earth. Arguing that population growth depended on food supplies, he emphasized the abundance of food and available farmland in America. He calculated that America's population was doubling every 20 years and would surpass that of England in a century; at that rate a century holds five doublings, a 32-fold increase, since 2^5 = 32. In 1751, he drafted Observations concerning the Increase of Mankind, Peopling of Countries, etc. Four years later, it was anonymously printed in Boston and was quickly reproduced in Britain, where it influenced the economist Adam Smith and later the demographer Thomas Malthus, who credited Franklin for discovering a rule of population growth. Franklin's prediction that British mercantilism was unsustainable alarmed British leaders who did not want to be surpassed by the colonies, so they became more willing to impose restrictions on the colonial economy. Kammen (1990) and Drake (2011) say Franklin's Observations concerning the Increase of Mankind (1755) stands alongside Ezra Stiles' "Discourse on Christian Union" (1760) as the leading works of 18th-century Anglo-American demography; Drake credits Franklin's "wide readership and prophetic insight." Franklin was also a pioneer in the study of slave demography, as shown in his 1755 essay. Writing in the persona of a farmer, he produced at least one critique of the negative consequences of price controls, trade restrictions, and subsidy of the poor. This is succinctly preserved in his letter to the London Chronicle published November 29, 1766, titled "On the Price of Corn, and Management of the poor." In a 1772 letter to Joseph Priestley, Franklin laid out the earliest known description of the Pro & Con list, a common decision-making technique, now sometimes called a decisional balance sheet: ... my Way is, to divide half a Sheet of Paper by a Line into two Columns, writing over the one Pro, and over the other Con. Then during three or four Days Consideration I put down under the different Heads short Hints of the different Motives that at different Times occur to me for or against the Measure. When I have thus got them all together in one View, I endeavour to estimate their respective Weights; and where I find two, one on each side, that seem equal, I strike them both out: If I find a Reason pro equal to some two Reasons con, I strike out the three. If I judge some two Reasons con equal to some three Reasons pro, I strike out the five; and thus proceeding I find at length where the Ballance lies; and if after a Day or two of farther Consideration nothing new that is of Importance occurs on either side, I come to a Determination accordingly.
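Franklin's procedure is easy to restate in modern terms: give each reason a rough weight, strike out reasons of equal total weight on opposite sides, and see which column survives. The short sketch below is one loose reading of that idea, not Franklin's own notation; the example reasons and weights are invented, and it relies on the observation that canceling equal weights on both sides never changes the difference of the column totals, so the net balance decides.

    # A rough sketch of Franklin's "prudential algebra" (illustrative only;
    # the reasons and weights below are invented for the example).

    def moral_algebra(pros, cons):
        """Each argument is a list of (reason, weight) pairs.
        Striking out equally weighted reasons on opposite sides, as
        Franklin describes, leaves the difference of the column totals
        unchanged, so that difference is where "the Ballance lies"."""
        return sum(w for _, w in pros) - sum(w for _, w in cons)

    pros = [("steady income", 3.0), ("useful public work", 2.0)]
    cons = [("long absence from family", 4.0)]

    balance = moral_algebra(pros, cons)
    if balance > 0:
        print("the balance lies Pro")
    elif balance < 0:
        print("the balance lies Con")
    else:
        print("no determination yet; keep weighing")

The several days Franklin allows for new reasons to surface amount to letting the two lists stabilize before computing the balance, which is the part of the procedure no arithmetic can shortcut.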
Views on religion, morality, and slavery Like the other advocates of republicanism, Franklin emphasized that the new republic could survive only if the people were virtuous. All his life, he explored the role of civic and personal virtue, as expressed in Poor Richard's aphorisms. He felt that organized religion was necessary to keep men good to their fellow men, but rarely attended religious services himself. When he met Voltaire in Paris and asked his fellow member of the Enlightenment vanguard to bless his grandson, Voltaire said in English, "God and Liberty", and added, "this is the only appropriate benediction for the grandson of Monsieur Franklin." Franklin's parents were both pious Puritans. The family attended the Old South Church, the most liberal Puritan congregation in Boston, where Benjamin Franklin was baptized in 1706. Franklin's father, a poor chandler, owned a copy of a book, Bonifacius: Essays to Do Good, by the Puritan preacher and family friend Cotton Mather, which Franklin often cited as a key influence on his life. "If I have been a useful citizen," Franklin wrote to Cotton Mather's son seventy years later, "the public owes the advantage of it to that book." His first pen name, Silence Dogood, paid homage both to the book and to a widely known sermon by Mather. The book preached the importance of forming voluntary associations to benefit society. Franklin learned about forming do-good associations from Mather, but his organizational skills made him the most influential force in making voluntarism an enduring part of the American ethos. Franklin formulated a presentation of his beliefs and published it in 1728. He no longer accepted the key Puritan ideas regarding salvation, the divinity of Jesus, or indeed much religious dogma. He classified himself as a deist in his 1771 autobiography, although he still considered himself a Christian. He retained a strong faith in a God as the wellspring of morality and goodness in man, and as a Providential actor in history responsible for American independence. At a critical impasse during the Constitutional Convention in June 1787, he attempted to introduce the practice of daily common prayer with these words: ... In the beginning of the contest with G. Britain, when we were sensible of danger we had daily prayer in this room for the Divine Protection. Our prayers, Sir, were heard, and they were graciously answered. All of us who were engaged in the struggle must have observed frequent instances of a Superintending providence in our favor. ... And have we now forgotten that powerful friend? or do we imagine that we no longer need His assistance. I have lived, Sir, a long time and the longer I live, the more convincing proofs I see of this truth—that God governs in the affairs of men....I therefore beg leave to move—that henceforth prayers imploring the assistance of Heaven, and its blessings on our deliberations, be held in this Assembly every morning before we proceed to business, and that one or more of the Clergy of this City be requested to officiate in that service. The motion gained almost no support and was never brought to a vote. Franklin was an enthusiastic admirer of the evangelical minister George Whitefield during the First Great Awakening. He did not himself subscribe to Whitefield's theology, but he admired Whitefield for exhorting people to worship God through good works. He published all of Whitefield's sermons and journals, thereby earning a great deal of money and boosting the Great Awakening. When he stopped attending church, Franklin wrote in his autobiography: ... Sunday being my studying day, I never was without some religious principles.
I never doubted, for instance, the existence of the Deity; that He made the world, and governed it by His providence; that the most acceptable service of God was the doing good to man; that our souls are immortal; and that all crime will be punished, and virtue rewarded, either here or hereafter. Franklin retained a lifelong commitment to the non-religious Puritan virtues and political values he had grown up with, and through his civic work and publishing, he succeeded in passing these values into the American culture permanently. He had a "passion for virtue." These Puritan values included his devotion to egalitarianism, education, industry, thrift, honesty, temperance, charity and community spirit. Thomas Kidd states, "As an adult, Franklin touted ethical responsibility, industriousness, and benevolence, even as he jettisoned Christian orthodoxy." The classical authors read in the Enlightenment period taught an abstract ideal of republican government based on hierarchical social orders of king, aristocracy and commoners. It was widely believed that English liberties relied on this balance of power, but also on hierarchical deference to the privileged class. "Puritanism ... and the epidemic evangelism of the mid-eighteenth century, had created challenges to the traditional notions of social stratification" by preaching that the Bible taught all men are equal, that the true value of a man lies in his moral behavior, not his class, and that all men can be saved. Franklin, steeped in Puritanism and an enthusiastic supporter of the evangelical movement, rejected the salvation dogma but embraced the radical notion of egalitarian democracy.[citation needed] Franklin's commitment to teach these values was itself something he gained from his Puritan upbringing, with its stress on "inculcating virtue and character in themselves and their communities." These Puritan values and the desire to pass them on were one of his quintessentially American characteristics and helped shape the character of the nation. Max Weber considered Franklin's ethical writings a culmination of the Protestant ethic, which created the social conditions necessary for the birth of capitalism. One of his characteristics was his respect, tolerance and promotion of all churches. Referring to his experience in Philadelphia, he wrote in his autobiography, "new Places of worship were continually wanted, and generally erected by voluntary Contribution, my Mite for such purpose, whatever might be the Sect, was never refused." "He helped create a new type of nation that would draw strength from its religious pluralism." The evangelical revivalists who were active mid-century, such as Whitefield, were the greatest advocates of religious freedom, "claiming liberty of conscience to be an 'inalienable right of every rational creature.'" Whitefield's supporters in Philadelphia, including Franklin, erected "a large, new hall, that ... could provide a pulpit to anyone of any belief." Franklin's rejection of dogma and doctrine and his stress on the God of ethics and morality and civic virtue made him the "prophet of tolerance." He composed "A Parable Against Persecution", an apocryphal 51st chapter of Genesis in which God teaches Abraham the duty of tolerance.
While he was living in London in 1774, he was present at the birth of British Unitarianism, attending the inaugural session of the Essex Street Chapel, at which Theophilus Lindsey drew together the first avowedly Unitarian congregation in England; this was somewhat politically risky and pushed religious tolerance to new boundaries, as a denial of the doctrine of the Trinity was illegal until the Doctrine of the Trinity Act 1813. Although his parents had intended for him a career in the church, Franklin as a young man adopted the Enlightenment religious belief in deism, that God's truths can be found entirely through nature and reason, declaring, "I soon became a thorough Deist." He rejected Christian dogma in a 1725 pamphlet A Dissertation on Liberty and Necessity, Pleasure and Pain, which he later saw as an embarrassment, while simultaneously asserting that God is "all wise, all good, all powerful." He defended his rejection of religious dogma with these words: "I think opinions should be judged by their influences and effects; and if a man holds none that tend to make him less virtuous or more vicious, it may be concluded that he holds none that are dangerous, which I hope is the case with me." After the disillusioning experience of seeing the decay in his own moral standards, and those of two friends in London whom he had converted to deism, Franklin decided that deism, though true, was not as useful in promoting personal morality as were the controls imposed by organized religion. Ralph Frasca contends that in his later life Franklin can be considered a non-denominational Christian, although he did not believe Christ was divine. In a major scholarly study of his religion, Thomas Kidd argues that Franklin believed that true religiosity was a matter of personal morality and civic virtue. Kidd says Franklin maintained his lifelong resistance to orthodox Christianity while arriving finally at a "doctrineless, moralized Christianity." According to David Morgan, Franklin was a proponent of "generic religion." He prayed to "Powerful Goodness" and referred to God as "the infinite." John Adams noted that he was a mirror in which people saw their own religion: "The Catholics thought him almost a Catholic. The Church of England claimed him as one of them. The Presbyterians thought him half a Presbyterian, and the Friends believed him a wet Quaker." Adams himself decided that Franklin best fit among the "Atheists, Deists, and Libertines." Whatever else Franklin was, concludes Morgan, "he was a true champion of generic religion." In a letter to Richard Price, Franklin states that he believes religion should support itself without help from the government, claiming, "When a Religion is good, I conceive that it will support itself; and, when it cannot support itself, and God does not take care to support, so that its Professors are oblig'd to call for the help of the Civil Power, it is a sign, I apprehend, of its being a bad one." 
In 1790, just about a month before he died, Franklin wrote a letter to Ezra Stiles, president of Yale College, who had asked him his views on religion: As to Jesus of Nazareth, my Opinion of whom you particularly desire, I think the System of Morals and his Religion, as he left them to us, the best the world ever saw or is likely to see; but I apprehend it has received various corrupt changes, and I have, with most of the present Dissenters in England, some Doubts as to his divinity; tho' it is a question I do not dogmatize upon, having never studied it, and I think it needless to busy myself with it now, when I expect soon an Opportunity of knowing the Truth with less Trouble. I see no harm, however, in its being believed, if that belief has the good consequence, as it probably has, of making his doctrines more respected and better observed; especially as I do not perceive that the Supreme takes it amiss, by distinguishing the unbelievers in his government of the world with any particular marks of his displeasure. On July 4, 1776, Congress appointed a three-member committee composed of Franklin, Jefferson, and Adams to design the Great Seal of the United States. Franklin's proposal (which was not adopted) featured the motto: "Rebellion to Tyrants is Obedience to God" and a scene from the Book of Exodus he took from the frontispiece of the Geneva Bible, with Moses, the Israelites, the pillar of fire, and George III depicted as pharaoh. The design that was produced was not acted upon by Congress, and the Great Seal's design was not finalized until a third committee was appointed in 1782. Franklin strongly supported the right to freedom of speech: In those wretched countries where a man cannot call his tongue his own, he can scarce call anything his own. Whoever would overthrow the liberty of a nation must begin by subduing the freeness of speech ... Without freedom of thought there can be no such thing as wisdom, and no such thing as public liberty without freedom of speech, which is the right of every man ... — Silence Dogood no. 8, 1722 Franklin sought to cultivate his character by a plan of 13 virtues, which he developed at age 20 (in 1726) and continued to practice in some form for the rest of his life. His autobiography lists his 13 virtues as: temperance, silence, order, resolution, frugality, industry, sincerity, justice, moderation, cleanliness, tranquility, chastity, and humility. Franklin did not try to work on them all at once. Instead, he worked on only one each week "leaving all others to their ordinary chance." While he did not adhere completely to the enumerated virtues, and by his own admission he fell short of them many times, he believed the attempt made him a better man, contributing greatly to his success and happiness, which is why in his autobiography, he devoted more pages to this plan than to any other single point and wrote, "I hope, therefore, that some of my descendants may follow the example and reap the benefit." Franklin's views and practices concerning slavery evolved over the course of his life. In his early years, Franklin owned seven slaves, including two men who worked in his household and his shop, but in his later years he became an advocate of abolition. Paid advertisements for the sale of slaves and for the capture of runaway slaves were a revenue stream for his newspaper, and Franklin allowed the sale of slaves in his general store. He later became an outspoken critic of slavery. In 1758, he advocated the opening of a school for the education of black slaves in Philadelphia. He took two slaves to England with him, Peter and King. 
King escaped with a woman to live in the outskirts of London, and by 1758 he was working for a household in Suffolk. After returning from England in 1762, Franklin grew more abolitionist in outlook, attacking American slavery. In the wake of Somerset v Stewart, he voiced frustration at British abolitionists: O Pharisaical Britain! to pride thyself in setting free a single Slave that happens to land on thy coasts, while thy Merchants in all thy ports are encouraged by thy laws to continue a commerce whereby so many hundreds of thousands are dragged into a slavery that can scarce be said to end with their lives, since it is entailed on their posterity! Franklin refused to publicly debate the issue of slavery at the 1787 Constitutional Convention. At the time of the American founding, there were about half a million slaves in the United States, mostly in the five southernmost states, where they made up 40% of the population. Many of the leading American founders – such as Thomas Jefferson, George Washington, and James Madison – owned slaves, but many others did not. Benjamin Franklin thought that slavery was "an atrocious debasement of human nature" and "a source of serious evils." In 1787, Franklin and Benjamin Rush helped write a new constitution for the Pennsylvania Society for Promoting the Abolition of Slavery, and that same year Franklin became president of the organization. In 1790, Quakers from New York and Pennsylvania presented their petition for abolition to Congress. Their argument against slavery was backed by the Pennsylvania Abolitionist Society. In his later years, as Congress was forced to deal with the issue of slavery, Franklin wrote several essays that stressed the importance of the abolition of slavery and of the integration of African Americans into American society. These writings included "An Address to the Public" (1789), "A Plan for Improving the Condition of the Free Blacks" (1789), and "Sidi Mehemet Ibrahim on the Slave Trade" (1790). Franklin became a vegetarian when he was a teenager apprenticing at a print shop, after coming upon a book by the early vegetarian advocate Thomas Tryon. In addition, he would also have been familiar with the moral arguments espoused by prominent vegetarian Quakers in the colonial-era Province of Pennsylvania, including Benjamin Lay and John Woolman. His reasons for vegetarianism were based on health, ethics, and economy: When about 16 years of age, I happen'd to meet with a book written by one Tryon, recommending a vegetable diet. I determined to go into it ... [By not eating meat] I presently found that I could save half what [my brother] paid me. This was an additional fund for buying books: but I had another advantage in it ... I made the greater progress from that greater clearness of head and quicker apprehension which usually attend temperance in eating and drinking. Franklin also declared the consumption of fish to be "unprovoked murder." Despite his convictions, he began to eat fish after being tempted by fried cod on a boat sailing from Boston, justifying the eating of animals by observing that the fish's stomach contained other fish. Nonetheless, he recognized the faulty ethics in this argument and would continue to be a vegetarian on and off. He was "excited" by tofu, which he learned of from the writings of a Spanish missionary to Southeast Asia, Domingo Fernández Navarrete. 
Franklin sent a sample of soybeans to prominent American botanist John Bartram and had previously written to British diplomat and Chinese trade expert James Flint inquiring as to how tofu was made, with their correspondence believed to be the first documented use of the word "tofu" in the English language. Franklin's "Second Reply to Vindex Patriae," a 1766 letter advocating self-sufficiency and less dependence on England, lists various examples of the bounty of American agricultural products, and does not mention meat. Detailing new American customs, he wrote that, "[t]hey resolved last spring to eat no more lamb; and not a joint of lamb has since been seen on any of their tables ... the sweet little creatures are all alive to this day, with the prettiest fleeces on their backs imaginable." The concept of preventing smallpox by variolation was introduced to colonial America by an African slave named Onesimus via his owner Cotton Mather in the early eighteenth century, but the procedure was not immediately accepted. James Franklin's newspaper carried articles in 1721 that vigorously denounced the concept. However, by 1736 Benjamin Franklin was known as a supporter of the procedure. Therefore, when four-year-old "Franky" died of smallpox, opponents of the procedure circulated rumors that the child had been inoculated, and that this was the cause of his subsequent death. When Franklin became aware of this gossip, he placed a notice in the Pennsylvania Gazette, stating: "I do hereby sincerely declare, that he was not inoculated, but receiv'd the Distemper in the common Way of Infection ... I intended to have my Child inoculated." The child had a bad case of flux (diarrhea), and his parents had waited for him to get well before having him inoculated. Franklin wrote in his Autobiography: "In 1736 I lost one of my sons, a fine boy of four years old, by the small-pox, taken in the common way. I long regretted bitterly, and still regret that I had not given it to him by inoculation. This I mention for the sake of parents who omit that operation, on the supposition that they should never forgive themselves if a child died under it; my example showing that the regret may be the same either way, and that, therefore, the safer should be chosen." Views on the future of technology In a letter to Joseph Priestley (February 8, 1780), Benjamin Franklin speculated that in the future "all Diseases may by sure means be prevented or cured, not excepting even that of Old Age, and our Lives lengthened at pleasure even beyond the antediluvian Standard". In the same letter, Franklin wrote: The rapid progress true science now makes, occasions my regretting sometimes that I was born so soon: it is impossible to imagine the height to which may be carried, in a thousand years, the power of man over matter; we may perhaps learn to deprive large masses of their gravity, and give them absolute levity for the sake of easy transport. Agriculture may diminish its labour and double its produce... In 1773, Franklin imagined a technology similar to cryonics: I wish it were possible to invent a method of embalming drowned persons in such a manner that they might be recalled to life at any period, however distant; for having a very ardent desire to see and observe the state of America a hundred years hence... Interests and activities Franklin is known to have played the violin, the harp, and the guitar. He also composed music, which included a string quartet in early classical style. 
While he was in London, he developed a much-improved version of the glass harmonica, in which the glasses rotate on a shaft, with the player's fingers held steady, instead of the other way around. He worked with the London glassblower Charles James to create it, and instruments based on his mechanical version soon found their way to other parts of Europe. Joseph Haydn, a fan of Franklin's enlightened ideas, had a glass harmonica in his instrument collection. Wolfgang Amadeus Mozart composed for Franklin's glass harmonica, as did Beethoven. Gaetano Donizetti used the instrument in the accompaniment to Amelia's aria "Par che mi dica ancora" in the tragic opera Il castello di Kenilworth (1821), as did Camille Saint-Saëns in his 1886 The Carnival of the Animals. Richard Strauss calls for the glass harmonica in his 1917 Die Frau ohne Schatten, and numerous other composers used Franklin's instrument as well.[citation needed] Franklin was an avid chess player. He was playing chess by around 1733, making him the first chess player known by name in the American colonies. His essay on "The Morals of Chess" in Columbian Magazine in December 1786 is the second known writing on chess in America. This essay in praise of chess and prescribing a code of behavior for the game has been widely reprinted and translated. He and a friend used chess as a means of learning the Italian language, which both were studying; the winner of each game between them had the right to assign a task, such as parts of the Italian grammar to be learned by heart, to be performed by the loser before their next meeting. Franklin was able to play chess more frequently against stronger opposition during his many years as a civil servant and diplomat in England, where the game was far better established than in America. He was able to improve his playing standard by facing more experienced players during this period. He regularly attended Old Slaughter's Coffee House in London for chess and socializing, making many important personal contacts. While in Paris, both as a visitor and later as ambassador, he visited the famous Café de la Régence, which France's strongest players made their regular meeting place. No records of his games have survived, so it is not possible to ascertain his playing strength in modern terms. Franklin was inducted into the U.S. Chess Hall of Fame in 1999. The Franklin Mercantile Chess Club in Philadelphia, the second oldest chess club in the U.S., is named in his honor. Legacy Franklin bequeathed £1,000 (about $4,400 at the time, or about $125,000 in 2021 dollars) each to the cities of Boston and Philadelphia, in trust to gather interest for 200 years. The trust began in 1785 when the French mathematician Charles-Joseph Mathon de la Cour, who admired Franklin greatly, wrote a friendly parody of Franklin's Poor Richard's Almanack called Fortunate Richard. The main character leaves a smallish amount of money in his will, five lots of 100 livres, to collect interest over one, two, three, four or five full centuries, with the resulting astronomical sums to be spent on impossibly elaborate utopian projects. Franklin, who was 79 years old at the time, wrote thanking him for a great idea and telling him that he had decided to leave a bequest of 1,000 pounds each to his native Boston and his adopted Philadelphia. By 1990, more than $2,000,000 (~$4.23 million in 2024) had accumulated in Franklin's Philadelphia trust, which had loaned the money to local residents. 
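The arithmetic behind the bequest is ordinary compound growth. As a minimal sketch (assuming, purely for illustration, a single constant annual rate over the full term, which no real trust achieves), the roughly $4,400 starting value growing to about $2,000,000 over 200 years implies an average return of a little over three percent per year:

    # Hedged sketch: implied average annual return on Franklin's Philadelphia trust.
    # Assumes (our assumption) one constant compounding rate over the entire term.
    principal = 4_400        # approximate 1790 dollar value of the 1,000-pound bequest
    final_value = 2_000_000  # approximate value of the Philadelphia trust by 1990
    years = 200

    implied_rate = (final_value / principal) ** (1 / years) - 1
    print(f"implied average annual return: {implied_rate:.2%}")  # roughly 3.1%

The same calculation against Boston's roughly $5,000,000 gives an implied rate closer to 3.6 percent, which illustrates how small differences in return compound into large differences over two centuries.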
From 1940 to 1990, the money was used mostly for mortgage loans. When the trust came due, Philadelphia decided to spend it on scholarships for local high school students. Franklin's Boston trust fund accumulated almost $5,000,000 during that same time; at the end of its first 100 years a portion was allocated to help establish a trade school that became the Franklin Institute of Boston, and the entire fund was later dedicated to supporting this institute. In 1787, a group of prominent ministers in Lancaster, Pennsylvania, proposed the foundation of a new college named in Franklin's honor. Franklin donated £200 towards the development of Franklin College (now called Franklin & Marshall College). As the only person to have signed the Declaration of Independence in 1776, the Treaty of Alliance with France in 1778, the Treaty of Paris in 1783, and the U.S. Constitution in 1787, Franklin is considered one of the leading Founding Fathers of the United States. His pervasive influence in the early history of the nation has led to his being jocularly called "the only president of the United States who was never president of the United States." Franklin's likeness is ubiquitous. Since 1914, it has adorned American $100 bills. From 1948 to 1963, Franklin's portrait was on the half-dollar. He has appeared on a $50 bill and on several varieties of the $100 bill of 1914 and 1918. Franklin also appears on the $1,000 Series EE savings bond. On April 12, 1976, as part of a bicentennial celebration, Congress dedicated a 20-foot (6 m) tall marble statue in Philadelphia's Franklin Institute as the Benjamin Franklin National Memorial. Vice President Nelson Rockefeller presided over the dedication ceremony. Many of Franklin's personal possessions are on display at the institute. In London, his house at 36 Craven Street, which is the only surviving former residence of Franklin, was first marked with a blue plaque and has since been opened to the public as the Benjamin Franklin House. In 1998, workmen restoring the building dug up the remains of six children and four adults hidden below the home. A total of 15 bodies have been recovered. The Friends of Benjamin Franklin House (the organization responsible for the restoration) note that the bones were likely placed there by William Hewson, who lived in the house for two years and who had built a small anatomy school at the back of the house. They note that while Franklin likely knew what Hewson was doing, he probably did not participate in any dissections because he was much more of a physicist than a medical man. He has been honored on U.S. postage stamps many times. The image of Franklin, the first postmaster general of the United States, occurs on the face of U.S. postage more than that of any other American save George Washington. He appeared on the first U.S. postage stamp issued in 1847. From 1908 through 1923, the U.S. Post Office issued a series of postage stamps commonly referred to as the Washington–Franklin Issues, in which Washington and Franklin were depicted many times over a 14-year period, the longest run of any one series in U.S. postal history. However, he only appears on a few commemorative stamps. Some of the finest portrayals of Franklin on record can be found on the engravings inscribed on the face of U.S. postage. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Alexander_Hamilton] | [TOKENS: 17867] |
Alexander Hamilton Alexander Hamilton (January 11, 1755 or 1757[a] – July 12, 1804) was an American military officer, statesman, and Founding Father who served as the first U.S. secretary of the treasury from 1789 to 1795 under the presidency of George Washington. He also founded America's first political party, the Federalist Party, in 1791. Born out of wedlock in Charlestown, Nevis, Hamilton was orphaned as a child and taken in by a prosperous merchant. He was given a scholarship and pursued his education at King's College (now Columbia University) in New York City where, despite his young age, he was an anonymous but prolific and widely read pamphleteer and advocate for the American Revolution. He then served as an artillery officer in the American Revolutionary War, where he saw military action against the British Army in the New York and New Jersey campaign, served for four years as aide-de-camp to Continental Army commander in chief George Washington, and fought under Washington's command in the war's climactic battle, the Siege of Yorktown, which secured American victory in the war and with it the independence of the United States. After the Revolutionary War, Hamilton served as a delegate from New York to the Congress of the Confederation in Philadelphia. He resigned to practice law and founded the Bank of New York. In 1786, Hamilton led the Annapolis Convention, which sought to strengthen the power of the loose confederation of independent states under the limited authorities granted the Congress by the Articles of Confederation. The following year he was a delegate to the Philadelphia Convention, which drafted the U.S. Constitution creating a more centralized federal national government. He then authored 51 of the 85 installments of The Federalist Papers, which proved persuasive in securing its ratification by the states. As a trusted member of President Washington's first cabinet, Hamilton served as the first U.S. secretary of the treasury. He envisioned a central government led by an energetic executive, a strong national defense, and a more diversified economy with significantly expanded industry. He successfully argued that the implied powers of the U.S. Constitution provided the legal basis to create the First Bank of the United States and to assume the states' war debts, which he funded with a tariff on imports and a whiskey tax. Hamilton opposed American entanglement with the succession of unstable French Revolutionary governments. In 1790, he persuaded the U.S. Congress to establish the U.S. Revenue Cutter Service to protect American shipping. He advocated in support of the 1794 Jay Treaty, under which the U.S. resumed friendly trade relations with the British Empire. Hamilton's views became the basis for the Federalist Party, which was opposed by the Democratic-Republican Party, led by Thomas Jefferson. Hamilton and other Federalists supported the Haitian Revolution, and Hamilton helped draft Haiti's constitution in 1801. After resigning as the nation's secretary of the treasury in 1795, Hamilton resumed his legal and business activities and helped lead the abolition of the Atlantic slave trade. In the Quasi-War, fought at sea between 1798 and 1800, Hamilton called for mobilization against France, and President John Adams appointed him major general. The U.S. Army, however, did not see combat in the conflict. Outraged by Adams' response to the crisis, Hamilton opposed his 1800 presidential re-election. 
Jefferson and Aaron Burr tied for the presidency in the electoral college and, despite philosophical differences, Hamilton endorsed Jefferson over Burr, whom he found unprincipled. When Burr ran for Governor of New York in 1804, Hamilton again opposed his candidacy, arguing that he was unfit for the office. Taking offense, Burr challenged Hamilton to a pistol duel, which took place in Weehawken, New Jersey, on July 11, 1804. Hamilton was mortally wounded and immediately transported back across the Hudson River in a delirious state to the home of William Bayard Jr. in Greenwich Village, New York, for medical attention. The following day, on July 12, 1804, Hamilton succumbed to his wounds. Scholars generally regard Hamilton as an astute and intellectually brilliant administrator, politician, and financier who was sometimes impetuous. His ideas are credited with influencing the founding principles of American finance and government. Early life and education Hamilton was born on January 11, 1755, or 1757,[a] in Charlestown, the capital of Nevis in the British Leeward Islands, where he spent his childhood. Hamilton and his older brother, James Jr., were born out of wedlock to Rachel Lavien (née Faucette),[b] a married woman of half-British and half-Huguenot descent, and James A. Hamilton, a Scotsman and the fourth son of Alexander Hamilton, the laird of Grange, Ayrshire. His maternal grandfather was John Faucette (born c. 1680–1684), a French Huguenot planter and medical doctor born in the Saintonge province of France who had settled on the British West Indies island of Nevis after the 1685 Revocation of the Edict of Nantes by Louis XIV. In 1745, prior to Alexander's birth, Rachel Lavien married Johann Lavien in Saint Croix. Together, they had one son, Peter. However, Rachel Lavien left her husband and first son in 1750, traveling to Saint Kitts, where she met James Hamilton. Hamilton and Lavien moved together to Nevis, her birthplace, where she had inherited a seaside lot in town from her father. While their mother was living, Alexander and James Jr. received individual tutoring and classes in a private school led by a Jewish headmistress. Alexander supplemented his education with a family library of 34 books. James Hamilton later abandoned Rachel Lavien and their two sons, ostensibly to "spar[e] [her] a charge of bigamy...after finding out that her first husband intend[ed] to divorce her under Danish law on grounds of adultery and desertion." Lavien then moved with their two children back to Saint Croix, where she supported them by managing a small store in Christiansted. Hamilton and his mother both contracted yellow fever. On February 19, 1768, Hamilton's mother died from the disease, leaving him orphaned. In probate court, Lavien's "first husband seized her estate" and obtained the few valuables that she had owned, including some household silver. Many items were auctioned off, and a friend purchased the family's books, returning them to Hamilton. The brothers were briefly taken in by their cousin Peter Lytton. However, Lytton took his own life in July 1769, leaving his property to his mistress and their son, and the propertyless Hamilton brothers were subsequently separated. James Jr. apprenticed with a local carpenter, while Alexander was given a home by Thomas Stevens, a merchant from Nevis. Hamilton became a clerk at Beekman and Cruger, a local import-export firm that traded with the Province of New York and New England. 
Though still a teenager, Hamilton proved capable enough as a trader to be left in charge of the firm for five months in 1771 while the owner was at sea. He remained an avid reader, and later developed an interest in writing and a life outside Saint Croix. He wrote a detailed letter to his father regarding a hurricane that devastated Christiansted on August 30, 1772. The Presbyterian Reverend Hugh Knox, a tutor and mentor to Hamilton, submitted the letter for publication in the Royal Danish-American Gazette. Biographer Ron Chernow found the letter astounding because "for all its bombastic excesses, it does seem wondrous [that a] self-educated clerk could write with such verve and gusto" and that a teenage boy produced an apocalyptic "fire-and-brimstone sermon" viewing the hurricane as a "divine rebuke to human vanity and pomposity." The essay impressed community leaders, who collected a fund to send Hamilton to the North American colonies for his education. In October 1772, Hamilton arrived by ship in Boston and then proceeded to New York City, where he boarded with Hercules Mulligan, the Irish-born brother of a trader known to Hamilton's benefactors. Mulligan assisted Hamilton in selling the cargo that Hamilton was to use to pay for his education and support. Hamilton sought to fill gaps in his education in preparation for college, and later that year began to attend Elizabethtown Academy, a preparatory school run by Francis Barber in Elizabeth, New Jersey. While there, he was introduced to William Livingston, a local leading intellectual and revolutionary who influenced him, and he boarded with the Livingstons while studying. In fall 1773, Hamilton returned to New York and entered Mulligan's alma mater, King's College (now Columbia University). Hamilton began as a private student and boarded again with Mulligan until he matriculated into the college the following year, in May 1774. His college roommate and lifelong friend Robert Troup spoke glowingly of Hamilton's clarity in concisely explaining the Patriot case against the British during the American Revolution in what was Hamilton's first public appearance, on July 6, 1774. As King's College students, Hamilton, Troup, and four other undergraduates formed an unnamed literary society that is regarded as a precursor to what is now the Philolexian Society. Later in 1774, Church of England clergyman Samuel Seabury in New York published a series of pamphlets promoting the Loyalist cause, seeking to provoke fear in the Thirteen Colonies, which he hoped would discourage them from uniting against the British. Hamilton countered anonymously with his first published political writings, A Full Vindication of the Measures of Congress and The Farmer Refuted. He published two additional pieces attacking the Quebec Act, and may have also authored the 15 anonymous installments of "The Monitor" for Holt's New York Journal. Hamilton supported the revolutionary cause before the war began, but he disapproved of mob violence against the Loyalists. On May 10, 1775, he was credited with saving King's College president Myles Cooper, a Loyalist, from an angry mob by speaking to the crowd long enough to allow Cooper to escape. Hamilton was forced to discontinue his studies before graduating when the college closed its doors during the British occupation of New York City, and he did not return because of his subsequent military service. 
Revolutionary War (1775–1782) In 1775, following the outbreak of the Revolutionary War with the Battles of Lexington and Concord, Hamilton and other King's College students joined a New York volunteer militia company, the Corsicans, whose name reflected the Corsican Republic that was suppressed six years earlier and which young American patriots regarded as a political model to be emulated. Hamilton drilled with the company before classes in the graveyard of nearby St. Paul's Chapel, studied military history and tactics on his own, and was soon elected an officer. Under fire from HMS Asia, and coordinating with Hercules Mulligan and the Sons of Liberty, he led his newly renamed unit, the "Hearts of Oak," on a successful raid for British cannons in the Battery. The seizure of the cannons resulted in the unit being re-designated an artillery company.: 13 Through his connections with influential New York patriots, including Alexander McDougall and John Jay, Hamilton was commissioned by the revolutionary government to raise the New York Provincial Company of Artillery of 60 men in 1776, and was then appointed captain. The company took part in the campaign of 1776 in and around New York City, serving as rearguard of the Continental Army's retreat up Manhattan, fighting at the Battle of Harlem Heights shortly afterward and at the Battle of White Plains a month later. At the Battle of Trenton, the company was stationed at the high point of town at the intersection of present-day Warren and Broad streets to keep the Hessians pinned down in their barracks. Hamilton participated in the Battle of Princeton on January 3, 1777. After an initial setback, Washington rallied the Continental Army troops and led them in a successful charge against the British forces. After making a brief stand, the British fell back, some leaving Princeton, and others taking up refuge in Nassau Hall. Hamilton transported three cannons to the hall, and had them fire upon the building as others rushed the front door and broke it down. The British subsequently put a white flag outside one of the windows; 194 British soldiers walked out of the building and laid down their arms, ending the battle in an American victory. Hamilton was invited to become an aide to Continental Army general William Alexander, Lord Stirling, and another general, perhaps Nathanael Greene or Alexander McDougall. He declined these invitations, believing his best chance for improving his station in life was glory on the Revolutionary War's battlefields. Hamilton eventually received an invitation he felt he could not refuse: to serve as George Washington's aide with the rank of lieutenant colonel. Washington believed that "Aides de camp are persons in whom entire confidence must be placed and it requires men of abilities to execute the duties with propriety and dispatch." Hamilton served four years as Washington's chief staff aide. He handled letters to the Continental Congress, state governors, and the most powerful generals of the Continental Army. He drafted many of Washington's orders and letters under Washington's direction, and he eventually issued orders on Washington's behalf over his own signature. Hamilton was involved in a wide variety of high-level duties, including intelligence, diplomacy, and negotiation with senior army officers as Washington's emissary. 
While stationed at the army's winter headquarters in Morristown, New Jersey, from December 1779 to March 1780, Hamilton met Elizabeth Schuyler, a daughter of General Philip Schuyler and Catherine Van Rensselaer. They married on December 14, 1780, at the Schuyler Mansion in Albany, New York. They had eight children: Philip, Angelica, Alexander, James, John, William, Eliza, and another Philip. During the Revolutionary War, Hamilton became the close friend of several fellow officers. His letters to the Marquis de Lafayette and to John Laurens, employing the sentimental literary conventions of the late 18th century and alluding to Greek history and mythology, have been read by Jonathan Ned Katz as revelatory of a homosocial or even homosexual relationship. Biographer Gregory D. Massey, among others, by contrast, dismisses all such speculation as unsubstantiated, describing their friendship as purely platonic camaraderie instead and placing their correspondence in the context of the flowery diction of the time. While on Washington's staff, Hamilton long sought command and a return to active combat. As the war drew nearer to an end, he knew that opportunities for military glory were diminishing. On February 15, 1781, Hamilton was reprimanded by Washington after a minor misunderstanding. Although Washington quickly tried to mend their relationship, Hamilton insisted on leaving his staff. He officially left in March, and settled with his new wife Elizabeth Schuyler close to Washington's headquarters. He continued to repeatedly ask Washington and others for a field command. Washington continued to demur, citing the need to appoint men of higher rank. This continued until early July 1781, when Hamilton submitted a letter to Washington with his commission enclosed, "thus tacitly threatening to resign if he didn't get his desired command." On July 31, Washington relented and assigned Hamilton as commander of a battalion of light infantry companies of the 1st and 2nd New York Regiments and two provisional companies from Connecticut. In the planning for the assault on Yorktown, Hamilton was given command of three battalions, which were to fight in conjunction with the allied French troops in taking Redoubts No. 9 and No. 10 of the British fortifications at Yorktown. Hamilton and his battalions took Redoubt No. 10 with bayonets alone so as not to risk accidental gunfire and discovery in a nighttime action, as planned. The French, suffering heavy casualties, took Redoubt No. 9. These actions forced the British surrender of an entire army at Yorktown, marking the de facto end of the war, although small battles continued for two more years until the signing of the Treaty of Paris and the departure of the last British troops. Return to civilian life (1782–1789) After Yorktown, Hamilton returned to New York City and resigned his commission in March 1782. He passed the bar in July after six months of self-directed education and, in October, was licensed to argue cases before the Supreme Court of New York. He also accepted an offer from Robert Morris to become receiver of continental taxes for New York state. Hamilton was appointed in July 1782 to the Congress of the Confederation as a New York representative for the term beginning in November 1782. Before his appointment to Congress in 1782, Hamilton was already sharing his criticisms of Congress. He expressed these criticisms in his letter to James Duane dated September 3, 1780: "The fundamental defect is a want of power in Congress ... 
the confederation itself is defective and requires to be altered; it is neither fit for war, nor peace." While on Washington's staff, Hamilton had become frustrated with the decentralized nature of the wartime Continental Congress, particularly its dependence upon the states for voluntary financial support that was not often forthcoming. Under the Articles of Confederation, Congress had no power to collect taxes or to demand money from the states. This lack of a stable source of funding had made it difficult for the Continental Army both to obtain its necessary provisions and to pay its soldiers. During the war, and for some time after, Congress obtained what funds it could from subsidies from the King of France, European loans, and aid requested from the several states, which were often unable or unwilling to contribute. An amendment to the Articles had been proposed by Thomas Burke, in February 1781, to give Congress the power to collect a five percent impost, or duty on all imports, but this required ratification by all states; securing its passage as law proved impossible after it was rejected by Rhode Island in November 1782. James Madison joined Hamilton in influencing Congress to send a delegation to persuade Rhode Island to change its mind. Their report recommending the delegation argued the national government needed not just some level of financial autonomy, but also the ability to make laws that superseded those of the individual states. Hamilton transmitted a letter arguing that Congress already had the power to tax, since it had the power to fix the sums due from the several states; but Virginia's rescission of its own ratification of this amendment ended the Rhode Island negotiations. While Hamilton was in Congress, discontented soldiers began to pose a danger to the young United States. Most of the army was then posted at Newburgh, New York. Those in the army were funding much of their own supplies, and they had not been paid in eight months. Furthermore, after Valley Forge, the Continental officers had been promised in May 1778 a pension of half their pay when they were discharged. By the early 1780s, due to the structure of the government under the Articles of Confederation, the government had no power to tax either to raise revenue or to pay its soldiers. In 1782, after several months without pay, a group of officers organized to send a delegation to lobby Congress, led by Captain Alexander McDougall. The officers had three demands: the army's pay, their own pensions, and commutation of those pensions into a lump-sum payment if Congress were unable to afford the half-salary pensions for life. Congress rejected the proposal. Several congressmen, including Hamilton, Robert Morris, and Gouverneur Morris, attempted to use the so-called Newburgh Conspiracy as leverage to secure support from the states and in Congress for funding of the national government. They encouraged McDougall to continue his aggressive approach, implying unknown consequences if their demands were not met, and defeated proposals designed to end the crisis without establishing general taxation: that the states assume the debt to the army, or that an impost be established dedicated to the sole purpose of paying that debt. Hamilton suggested using the Army's claims to prevail upon the states for the proposed national funding system. The Morrises and Hamilton contacted General Henry Knox to suggest he and the officers defy civil authority, at least by not disbanding if the army were not satisfied. 
Hamilton wrote to Washington, suggesting that he himself covertly "take direction" of the officers' efforts to secure redress, so as to win continental funding while keeping the army within the limits of moderation. Washington wrote Hamilton back, declining to involve the army. After the crisis had ended, Washington warned of the dangers of using the army as leverage to gain support for the national funding plan. On March 15, Washington defused the Newburgh situation by addressing the officers personally. Congress ordered the Army officially disbanded in April 1783. In the same month, Congress passed a new measure for a 25-year impost—which Hamilton voted against—that again required the consent of all the states; it also approved a commutation of the officers' pensions to five years of full pay. Rhode Island again opposed these provisions, and Hamilton's robust assertions of national prerogatives in his previous letter were widely held to be excessive. In June 1783, a different group of disgruntled soldiers from Lancaster, Pennsylvania, sent Congress a petition demanding their back pay. When they began to march toward Philadelphia, Congress charged Hamilton and two others with intercepting the mob. Hamilton requested militia from Pennsylvania's Supreme Executive Council, but was turned down. Hamilton instructed Assistant Secretary of War William Jackson to intercept the men. Jackson was unsuccessful. The mob arrived in Philadelphia, and the soldiers proceeded to harangue Congress for their pay. Hamilton argued that Congress ought to adjourn to Princeton, New Jersey. Congress agreed, and relocated there. Frustrated with the weakness of the national government, Hamilton, while in Princeton, drafted a call to revise the Articles of Confederation. This resolution contained many features of the future Constitution of the United States, including a strong federal government with the ability to collect taxes and raise an army. It also included the separation of powers into the legislative, executive, and judicial branches. Hamilton resigned from Congress in 1783. When the British left New York in 1783, he practiced law there in partnership with Richard Harison. He specialized in defending Tories and British subjects, as in Rutgers v. Waddington, in which he defeated a claim for damages done to a brewery by the Englishmen who held it during the military occupation of New York. He pleaded for the mayor's court to interpret state law consistent with the 1783 Treaty of Paris, which had ended the Revolutionary War.: 64–69 In 1784, Hamilton founded the Bank of New York. Long dissatisfied with the Articles of Confederation as too weak to be effective, Hamilton played a major leadership role at the 1786 Annapolis Convention. He drafted its resolution for a constitutional convention, and in doing so brought one step closer to reality his longtime desire to have a more effectual, more financially self-sufficient federal government. As a member of the legislature of New York, Hamilton argued forcefully and at length in favor of a bill to recognize the sovereignty of the State of Vermont, against numerous objections to its constitutionality and policy. Consideration of the bill was deferred to a later date. From 1787 to 1789, Hamilton exchanged letters with Nathaniel Chipman, a lawyer representing Vermont. 
After the Constitution of the United States went into effect, Hamilton said, "One of the first subjects of deliberation with the new Congress will be the independence of Kentucky, for which the southern states will be anxious. The northern will be glad to send a counterpoise in Vermont." Vermont was admitted to the Union in 1791. In 1788, he was awarded a Master of Arts degree from his alma mater, the former King's College, now reconstituted as Columbia College. It was during this post-war period that Hamilton served on the college's board of trustees, playing a part in the reopening of the college and placing it on firm financial footing. In 1787, Hamilton served as assemblyman from New York County in the New York State Legislature and, with the backing of his father-in-law Philip Schuyler, was chosen as a delegate to the Constitutional Convention in Philadelphia.: 191 Even though Hamilton had been a leader in calling for a new Constitutional Convention, his direct influence at the Convention itself was quite limited. Governor George Clinton's faction in the New York legislature had chosen New York's other two delegates, John Lansing Jr. and Robert Yates, and both of them opposed Hamilton's goal of a strong national government. Thus, whenever the other two members of the New York delegation were present, they decided New York's vote, to ensure that there were no major alterations to the Articles of Confederation.: 195 Early in the convention, Hamilton made a speech proposing a president-for-life; it had no effect upon the deliberations of the convention. He proposed to have an elected president and elected senators who would serve for life, contingent upon "good behavior" and subject to removal for corruption or abuse; this idea later contributed to the hostile view, held by James Madison, of Hamilton as a monarchist sympathizer. According to Madison's notes, Hamilton said with regard to the executive, "The English model was the only good one on this subject. The hereditary interest of the king was so interwoven with that of the nation, and his personal emoluments so great, that he was placed above the danger of being corrupted from abroad... Let one executive be appointed for life who dares execute his powers." Hamilton argued, "And let me observe that an executive is less dangerous to the liberties of the people when in office during life than for seven years. It may be said this constitutes as an elective monarchy ... But by making the executive subject to impeachment, the term 'monarchy' cannot apply ..." In his notes of the convention, Madison interpreted Hamilton's proposal as claiming power for the "rich and well born". Madison's perspective all but isolated Hamilton from his fellow delegates and others who felt his proposals did not reflect the ideas of revolution and liberty. During the convention, Hamilton constructed a draft for the Constitution based on the convention debates, but he never presented it. This draft had most of the features of the actual Constitution. In this draft, the Senate was to be elected in proportion to the population, being two-fifths the size of the House, and the president and senators were to be elected through complex multistage elections, in which chosen electors would elect smaller bodies of electors; they would hold office for life, but were removable for misconduct. The president would have an absolute veto. The Supreme Court was to have immediate jurisdiction over all lawsuits involving the United States, and state governors were to be appointed by the federal government. 
At the end of the convention, Hamilton was still not content with the final Constitution, but signed it anyway as a vast improvement over the Articles of Confederation, and urged his fellow delegates to do so also. Since the other two members of the New York delegation, Lansing and Yates, had already withdrawn, Hamilton was the only New York signer of the United States Constitution.: 206 He then took a highly active part in the successful campaign for the document's ratification in New York in 1788, which was a crucial step in its national ratification. He first attempted to use the Constitution's popularity among the masses to compel George Clinton to support ratification, but was unsuccessful. The state convention in Poughkeepsie in June 1788 pitted Hamilton, Jay, James Duane, Robert Livingston, and Richard Morris against the Clintonian faction led by Melancton Smith, Lansing, Yates, and Gilbert Livingston. Clinton's faction wanted to amend the Constitution, while maintaining the state's right to secede if their attempts failed, and members of Hamilton's faction were against any conditional ratification, under the impression that New York would not be accepted into the Union. During the state convention, news that New Hampshire and Virginia had become the ninth and tenth states to ratify the Constitution, respectively, ensured that there would be no adjournment and that a compromise would have to be reached. Hamilton's arguments for ratification were largely iterations of his work in The Federalist Papers, and Smith eventually came around to ratification, though more out of necessity than because of Hamilton's rhetoric. The state convention ratified the Constitution 30 to 27 on July 26, 1788. Hamilton recruited John Jay and James Madison to write The Federalist Papers, a series of essays, to defend the proposed Constitution. He made the largest contribution to that effort, writing 51 of the 85 essays published. Hamilton supervised the entire project, enlisted the participants, wrote the majority of the essays, and oversaw the publication. During the project, each person was responsible for their areas of expertise. Jay covered foreign relations. Madison covered the history of republics and confederacies, along with the anatomy of the new government. Hamilton covered the branches of government most pertinent to him: the executive and judicial branches, with some aspects of the Senate, as well as covering military matters and taxation. The papers first appeared in The Independent Journal on October 27, 1787. Hamilton wrote the first paper signed as Publius, and all of the subsequent papers were signed under the name.: 210 Jay wrote the next four papers to elaborate on the confederation's weakness and the need for unity against foreign aggression and against splitting into rival confederacies, and, except for No. 64, was not further involved.: 211 Hamilton's highlights included discussion that although republics have been culpable for disorders in the past, advances in the "science of politics" had fostered principles that ensured that those abuses could be prevented, such as the division of powers, legislative checks and balances, an independent judiciary, and legislators that were represented by electors (No. 7–9). Hamilton also wrote an extensive defense of the Constitution (No. 23–36), and discussed the Senate and executive and judicial branches (No. 65–85). Hamilton and Madison worked to describe the anarchic state of the confederation (No. 
15–22), and the two have been described as not being significantly different in thought during this time period—in contrast to their stark opposition later in life. Subtle differences appeared with the two when discussing the necessity of standing armies. First U.S. secretary of the treasury (1789–1795) In 1789, Washington—who had become the first president of the United States—appointed Hamilton to be his cabinet's secretary of the treasury on the advice of Robert Morris, Washington's initial pick. On September 11, 1789, Hamilton was nominated and confirmed in the Senate and sworn in the same day as the first United States secretary of the treasury. Before adjourning in September 1789, the House requested that Hamilton make a report of suggestions to improve the public credit by January 1790. Hamilton had written to Morris as early as 1781 that fixing the public credit would win their objective of independence. The sources that Hamilton used ranged from Frenchmen such as Jacques Necker and Montesquieu to British writers such as Hume, Hobbes, and Malachy Postlethwayt. While writing the report he also sought out suggestions from contemporaries such as John Witherspoon and Madison. Although they agreed on additional taxes, such as duties on distilleries and imported liquors and taxes on land, Madison feared that the securities from the government debt would fall into foreign hands.: 244–245 Hamilton divided the debt into national and state, and further divided the national debt into foreign and domestic debt. While there was agreement on how to handle the foreign debt, especially with France, there was not with regard to the national debt held by domestic creditors. During the Revolutionary War, affluent citizens had invested in bonds, and war veterans had been paid with promissory notes and IOUs that plummeted in price during the Confederation. In response, the war veterans sold the securities to speculators for as little as fifteen to twenty cents on the dollar. Hamilton felt the money from the bonds should not go to the soldiers, who had shown little faith in the country's future, but to the speculators who had bought the bonds from them. The difficulty of tracking down the original bondholders, along with the discrimination the government would show among classes of holders if only the war veterans were compensated, also weighed as factors for Hamilton. As for the state debts, Hamilton suggested consolidating them with the national debt and labeling the whole as federal debt, for the sake of efficiency on a national scale. In the report, Hamilton felt that the securities should be paid at full value to their legitimate owners, including those who took the financial risk of buying government bonds that most experts thought would never be redeemed. He argued that liberty and property security were inseparable, and that the government should honor the contracts, as they formed the basis of public and private morality. To Hamilton, the proper handling of the government debt would allow America to borrow at affordable interest rates and would be a stimulant to the economy. The last portion of the report dealt with eliminating the debt by utilizing a sinking fund that would retire five percent of the debt annually until it was paid off. 
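Two of the figures in this account lend themselves to quick arithmetic. A note bought at fifteen to twenty cents on the dollar and later redeemed at face value returns five to nearly seven times the purchase price, and a sinking fund retiring five percent of the debt annually, read as five percent of the original principal, pays the whole off in twenty years. A minimal sketch (the round numbers and the linear reading of the sinking fund are assumptions for illustration, not figures from the report itself):

    # Hedged sketch of two figures from the public-credit discussion above.
    # All concrete numbers are illustrative assumptions.

    # 1. Speculator windfall: notes bought at 15-20 cents on the dollar, redeemed at par.
    for price in (0.15, 0.20):
        print(f"bought at {price:.2f}, redeemed at 1.00 -> {1.00 / price:.1f}x return")

    # 2. Sinking fund: retiring 5% of the original principal each year.
    principal = 100.0                     # arbitrary starting debt
    annual_retirement = 0.05 * principal  # 5.0 retired per year
    print(f"paid off after {principal / annual_retirement:.0f} years")  # 20 years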
Because the bonds traded well below their face value, the purchases would benefit the government as the securities rose in price.: 300 When the report was submitted to the House of Representatives, detractors soon began to speak against it. Some of the negative views expressed in the House were that programs resembling British practice were wicked, and that the balance of power would shift away from the representatives to the executive branch. William Maclay suspected that several congressmen were involved in government securities, seeing Congress in an unholy league with New York speculators.: 302 Congressman James Jackson also spoke against New York, with allegations of speculators attempting to swindle those who had not yet heard about Hamilton's report.: 303 The involvement of members of Hamilton's circle, such as Schuyler, William Duer, James Duane, Gouverneur Morris, and Rufus King, as speculators did not sit well with those against the report, either, though Hamilton personally did not own or deal a share in the debt.: 304 : 250 Madison eventually spoke against it by February 1790. Although he was not against current holders of government debt profiting, he wanted the windfall to go to the original holders. Madison felt that the original holders had not lost faith in the government but had sold their securities out of desperation.: 305 Madison's compromise was seen as egregious by both Hamiltonians and dissidents such as Maclay, and his motion was defeated 36 votes to 13 on February 22.: 305 : 255 The fight for the national government to assume state debt was a longer issue and lasted over four months. During the period, Alexander White requested a statement of the resources Hamilton intended to apply to the payment of the state debts; the request was rejected because Hamilton could not prepare the information by March 3, and assumption was even postponed by his own supporters, despite his assembling a report the next day consisting of a series of additional duties to meet the interest on the state debts.: 297–298 Duer resigned as Assistant Secretary of the Treasury, and assumption was voted down 31 votes to 29 on April 12.: 258–259 During this period, Hamilton bypassed the rising issue of slavery in Congress, after Quakers petitioned for its abolition, returning to the issue the following year. Another issue in which Hamilton played a role was the temporary relocation of the capital from New York City. Tench Coxe was sent to speak to Maclay to bargain about the capital being temporarily relocated to Philadelphia, as a single vote in the Senate was needed and five in the House for the bill to pass.: 263 Thomas Jefferson wrote years afterward that Hamilton had a discussion with him, around this time period, about the capital of the United States being relocated to Virginia by means of a "pill" that "would be peculiarly bitter to the Southern States, and that some concomitant measure should be adopted to sweeten it a little to them".: 263 The bill passed in the Senate on July 21 and in the House 34 votes to 28 on July 26, 1790.: 263 Hamilton's Report on a National Bank was an outgrowth of the first Report on the Public Credit. Although Hamilton had been forming ideas of a national bank as early as 1779,: 268 he had gathered ideas in various ways over the past eleven years. These included theories from Adam Smith, extensive studies on the Bank of England, the blunders of the Bank of North America and his experience in establishing the Bank of New York. 
He also used American records from James Wilson, Pelatiah Webster, Gouverneur Morris, and from his assistant treasury secretary Tench Coxe. He thought that this plan for a national bank could help in any sort of financial crisis. Hamilton suggested that Congress should charter the national bank with a capitalization of $10 million, one-fifth of which would be handled by the government. Since the government did not have the money, it would borrow the money from the bank itself, and repay the loan in ten even annual installments.: 194 The rest was to be available to individual investors. The bank was to be governed by a twenty-five-member board of directors that was to represent a large majority of the private shareholders, which Hamilton considered essential for keeping the bank under private direction.: 268 Hamilton's bank model had many similarities to that of the Bank of England, except that Hamilton wanted to exclude the government from involvement in the public debt while providing a large, firm, and elastic money supply for the functioning of normal businesses and usual economic development, among other differences.: 194–195 The tax revenue to initiate the bank was to come from the same source he had previously proposed: increased duties on imported spirits such as rum, liquor, and whiskey.: 195–196 The bill passed through the Senate practically without a problem, but objections to the proposal increased by the time it reached the House of Representatives. It was generally held by critics that Hamilton was serving the interests of the Northeast by means of the bank, and that those living an agrarian lifestyle would not benefit from it.: 270 Among those critics was James Jackson of Georgia, who also attempted to refute the report by quoting from The Federalist Papers.: 270 Madison and Jefferson also opposed the bank bill. The possibility of the capital not being moved to the Potomac if the bank were firmly established in Philadelphia was a more significant reason, and the actions that Pennsylvania members of Congress took to keep the capital there made both men anxious.: 199–200 The Whiskey Rebellion would later show how, in other financial plans, a gulf existed between the classes as the wealthy profited from the taxes. Madison warned the Pennsylvania congress members that he would attack the bill as unconstitutional in the House, and followed up on his threat. Madison argued his case about where in the Constitution the power to establish a bank could be found, but he failed to sway members of the House, and his authority on the Constitution was questioned by a few members.: 200–201 The bill eventually passed in overwhelming fashion, 39 to 20, on February 8, 1791.: 271 Washington hesitated to sign the bill, as he received objections from Attorney General Edmund Randolph and Thomas Jefferson.
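Before turning to the constitutional fight over the bill, a quick aside on the numbers in the plan above: a minimal sketch of the government's subscription arithmetic (the even ten-part split is the one implied by the text; interest terms are ignored here):

```python
# Illustrative arithmetic for the bank plan's figures as described above:
# $10 million capitalization, one-fifth subscribed by the government,
# financed by a loan from the bank repaid in ten even annual installments.
# (A minimal sketch of the even split implied by the text, not a model
# of the actual loan terms.)

CAPITALIZATION = 10_000_000             # total bank capital, in dollars
GOVERNMENT_SHARE = CAPITALIZATION // 5  # one-fifth held by the government

installments = 10
annual_payment = GOVERNMENT_SHARE / installments

print(f"Government subscription: ${GOVERNMENT_SHARE:,}")   # $2,000,000
print(f"Annual installment:      ${annual_payment:,.0f}")  # $200,000
# The remaining $8,000,000 of stock was to go to private investors.
```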
Jefferson dismissed the Necessary and Proper Clause as reasoning for the creation of a national bank, stating that the enumerated powers "can all be carried into execution without a bank.": 271–272 Along with Randolph and Jefferson's objections, Washington's involvement in the movement of the capital from Philadelphia is also thought to be a reason for his hesitation.: 202–203 In response to the objection over the clause, Hamilton stated that "Necessary often means no more than needful, requisite, incidental, useful, or conducive to", and the bank was a "convenient species of medium in which [taxes] are to be paid.": 272–273 Washington would eventually sign the bill into law.: 272–273 Hamilton's push for a national bank was not an isolated event but part of a broader, long-running effort to establish a central banking system in the United States—one that would ultimately result in the Federal Reserve. While Hamilton's vision laid the groundwork for a structured financial system, the concept of centralized banking has remained one of the most polarizing economic debates in American history, garnering both staunch criticism and fervent support from economists and the public alike. In 1791, Hamilton submitted the Report on the Establishment of a Mint to the House of Representatives. Many of Hamilton's ideas for this report drew on European economists, resolutions of the 1785 and 1786 Continental Congress meetings, and people such as Robert Morris, Gouverneur Morris, and Thomas Jefferson.: 197 Because the most circulated coins in the United States at the time were Spanish currency, Hamilton proposed that minting a United States dollar weighing almost as much as the Spanish peso would be the simplest way to introduce a national currency. Hamilton differed from European monetary policymakers in his desire to overprice gold relative to silver, on the grounds that the United States would always receive an influx of silver from the West Indies.: 197 Despite his own preference for a monometallic gold standard, he ultimately issued a bimetallic currency at a fixed 15:1 ratio of silver to gold.: 197 Hamilton proposed that the U.S. dollar should have fractional coins using decimals, rather than eighths like the Spanish coinage. This innovation was originally suggested by Superintendent of Finance Robert Morris, with whom Hamilton corresponded after examining one of Morris's Nova Constellatio coins in 1783. He also desired the minting of small-value coins, such as silver ten-cent and copper cent and half-cent pieces, to reduce the cost of living for the poor.: 198 One of his main objectives was for the general public to become accustomed to handling money on a frequent basis.: 198 By 1792, Hamilton's principles were adopted by Congress, resulting in the Coinage Act of 1792 and the creation of the mint. There was to be a ten-dollar gold Eagle coin, a silver dollar, and fractional money ranging from one-half cent to fifty cents. Silver and gold coins were being issued by 1795. Smuggling off American coasts was an issue before the Revolutionary War, and after the Revolution it became more problematic. Along with smuggling, lack of shipping control, pirating, and a revenue imbalance were also major problems. In response, Hamilton proposed that Congress enact a naval police force of revenue cutters to patrol the waters and assist the customs collectors with confiscating contraband.
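Stepping back briefly to the mint report described above, the 15:1 bimetallic ratio and the decimal subdivision can be sketched in a few lines; the 371.25-grain silver weight used below is the figure commonly cited for the Coinage Act of 1792 and is included only for illustration:

```python
# Arithmetic sketch of the bimetallic standard described above.
# Assumption: the dollar's silver content (371.25 grains pure silver)
# is the figure commonly cited for the Coinage Act of 1792.

SILVER_TO_GOLD = 15                 # fixed ratio: 15 oz silver = 1 oz gold
SILVER_GRAINS_PER_DOLLAR = 371.25   # grains of pure silver in one dollar

# At 15:1, the same dollar in gold weighs one-fifteenth as much:
gold_grains_per_dollar = SILVER_GRAINS_PER_DOLLAR / SILVER_TO_GOLD
print(gold_grains_per_dollar)       # 24.75 grains of pure gold

# Decimal subdivision (Hamilton's proposal) versus Spanish eighths:
print(1 / 10)   # a ten-cent piece: 0.1 dollar
print(1 / 8)    # a Spanish-style "eighth": 0.125 dollar
```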
This idea was also proposed to assist in tariff control, boost the American economy, and promote the merchant marine. It is thought that the experience he obtained during his apprenticeship with Nicholas Cruger was influential in his decision-making. Concerning some of the details of the System of Cutters, Hamilton wanted the first ten cutters stationed in different areas of the United States, from New England to Georgia. Each of those cutters was to be armed with ten muskets and bayonets, twenty pistols, two chisels, one broad-ax, and two lanterns. The fabric of the sails was to be domestically manufactured, and provisions were made for the employees' food supply and etiquette when boarding ships. Congress established the Revenue Cutter Service on August 4, 1790, which is viewed as the birth of the United States Coast Guard. One of the principal sources of revenue Hamilton prevailed upon Congress to approve was an excise tax on whiskey. In his first tariff bill in January 1790, Hamilton proposed to raise the three million dollars needed to pay for government operating expenses and interest on domestic and foreign debts by means of an increase in duties on imported wines, distilled spirits, tea, coffee, and domestic spirits. It failed, with Congress adopting most of the recommendations except the excise tax on whiskey. The same year, Madison modified Hamilton's tariff to involve only import duties; it was passed in September. Seeking to diversify revenues, as three-fourths of the revenue gathered came from commerce with Great Britain, Hamilton tried again when presenting his Report on Public Credit in 1790 to implement an excise tax on both imported and domestic spirits. The taxation rate was graduated in proportion to the whiskey's proof, and Hamilton intended to equalize the tax burden between imported and domestic liquor. In lieu of the excise on production, citizens could pay 60 cents per gallon of dispensing capacity, along with an exemption for small stills used exclusively for domestic consumption. He realized the loathing that the tax would receive in rural areas, but thought the taxing of spirits more reasonable than land taxes. Opposition initially came from Pennsylvania's House of Representatives, which protested the tax. William Maclay had noted that not even the Pennsylvanian legislators had been able to enforce excise taxes in the western regions of the state. Hamilton was aware of the potential difficulties and proposed giving inspectors the ability to search the buildings in which distillers were designated to store their spirits, and, with a warrant, to search suspected illegal storage facilities and confiscate contraband. Although the inspectors were not allowed to search houses and warehouses, they were to visit twice a day and file weekly reports in extensive detail. Hamilton cautioned against expedited judicial means, favoring jury trials for potential offenders. As early as 1791, locals began to shun or threaten inspectors, as they felt the inspection methods were intrusive. Inspectors were also tarred and feathered, blindfolded, and whipped. Hamilton had attempted to appease the opposition with lowered tax rates, but it did not suffice. Strong opposition to the whiskey tax by cottage producers in remote, rural regions erupted into the Whiskey Rebellion in 1794; in western Pennsylvania and western Virginia, whiskey was the basic export product and was fundamental to the local economy.
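For a sense of how the two payment routes above compare, a toy calculation follows; only the 60-cents-per-gallon capacity figure comes from the text, while the per-gallon production rate is a hypothetical placeholder:

```python
# Toy comparison of the two options described above: an excise on each
# gallon actually produced versus a flat 60 cents per gallon of a
# still's dispensing capacity. The 10-cent production rate is a
# HYPOTHETICAL placeholder; only the 60-cent figure comes from the text.

def production_excise(gallons_produced: float, rate_per_gallon: float) -> float:
    """Tax owed when assessed on actual output."""
    return gallons_produced * rate_per_gallon

def capacity_payment(capacity_gallons: float, rate: float = 0.60) -> float:
    """Flat alternative assessed on dispensing capacity."""
    return capacity_gallons * rate

# A 100-gallon still producing 500 gallons in a year (both hypothetical):
print(production_excise(500, 0.10))  # 50.0 dollars owed on output
print(capacity_payment(100))         # 60.0 dollars owed on capacity
# Here the output excise is cheaper; a busier still would make the flat
# capacity payment the better deal.
```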
In response to the rebellion, believing compliance with the laws was vital to the establishment of federal authority, Hamilton accompanied President Washington, General Henry "Light Horse Harry" Lee, and more federal troops than Washington had usually commanded during the Revolution to the rebellion's site. This overwhelming display of force intimidated the leaders of the insurrection, ending the rebellion virtually without bloodshed. Hamilton's next report was his Report on Manufactures. Although Congress had requested of him on January 15, 1790, a report on manufacturing that would expand the United States' independence, the report was not submitted until December 5, 1791.: 274, 277 In the report, Hamilton quoted from The Wealth of Nations and used the French physiocrats as an example in rejecting agrarianism and the physiocratic theory, respectively.: 233 Hamilton also refuted Smith's ideas of government noninterference, as it would have been detrimental for trade with other countries.: 244 Hamilton also thought that the United States, being a primarily agrarian country, would be at a disadvantage in dealing with Europe. In response to the agrarian detractors, Hamilton stated that the agriculturists' interest would be advanced by manufactures, and that agriculture was just as productive as manufacturing.: 233 : 276 Hamilton argued for industrial policy to support a modern manufacturing industry in the United States. Among the ways that the government should assist manufacturing, Hamilton argued for government assistance to "infant industries" so they could achieve economies of scale: by levying protective duties on imported foreign goods that were also manufactured in the United States, by withdrawing duties levied on raw materials needed for domestic manufacturing,: 277 and through pecuniary bounties.: 277 He also encouraged immigration as a way to improve the American workforce. Congress shelved the report without much debate, except for Madison's objection to Hamilton's formulation of the general welfare clause, which Hamilton construed liberally as a legal basis for his extensive programs. In 1791, Hamilton, along with Coxe and several entrepreneurs from New York City and Philadelphia, formed the Society for the Establishment of Useful Manufactures, a private industrial corporation. In May 1792, the directors decided to examine the Great Falls of the Passaic River in New Jersey as a possible location for a manufacturing center. On July 4, 1792, the society's directors met Philip Schuyler at Abraham Godwin's hotel on the Passaic River, where they toured the area prospecting for the national manufactory. It was originally suggested that they dig mile-long trenches and build the factories away from the falls, but Hamilton argued that it would be too costly and laborious. The location at the Great Falls of the Passaic River in New Jersey was selected due to its access to raw materials, its dense population, and the water power available from the falls of the Passaic.: 231 The factory town was named Paterson after New Jersey's Governor William Paterson, who signed the charter.: 232 Unlike the report, the profits were to accrue to specific corporate interests rather than confer benefits on the nation and its citizens.
Hamilton also suggested that the first stock offering be $500,000, eventually increasing to $1 million, and welcomed state and federal government subscriptions alike.: 280 The company was never successful, with numerous shareholders reneging on stock payments and some going bankrupt. William Duer, the governor of the program, was sent to debtors' prison, where he died. In spite of Hamilton's efforts to mend the disaster, the company folded. When France and Britain went to war in early 1793, all four members of the Cabinet were consulted on what to do. They and Washington unanimously agreed to remain neutral, and to have Edmond-Charles Genêt, the French ambassador who was raising privateers and mercenaries on American soil, recalled.: 336–341 However, in 1794, policy toward Britain became a major point of contention between the two parties. Hamilton and the Federalists wished for more trade with Britain, the largest trading partner of the newly formed United States. The Republicans saw monarchist Britain as the main threat to republicanism and proposed instead to start a trade war.: 327–328 To avoid war, Washington sent Chief Justice John Jay to negotiate with the British, with Hamilton largely writing Jay's instructions. The result was a treaty denounced by the Republicans, but Hamilton mobilized support throughout the land. The Jay Treaty passed the Senate in 1795 by exactly the required two-thirds majority. The treaty resolved issues remaining from the Revolution, averted war, and made possible ten years of peaceful trade between the United States and Britain.: Ch 9 Historian George Herring notes the "remarkable and fortuitous economic and diplomatic gains" produced by the Treaty. Several European states had formed the Second League of Armed Neutrality against incursions on their neutral rights; the cabinet was also consulted on whether the United States should join the alliance and decided not to. It kept that decision secret, but Hamilton revealed it in private to George Hammond, the British minister to the United States, without telling Jay or anyone else. His act remained unknown until Hammond's dispatches were read in the 1920s. This revelation may have had limited effect on the negotiations; Jay did threaten to join the League at one point, but the British had other reasons not to view the alliance as a serious threat.: 411–412 Hamilton's wife suffered a miscarriage while he was absent during his armed repression of the Whiskey Rebellion. In the wake of this, Hamilton tendered his resignation from office on December 1, 1794, giving Washington two months' notice. Before leaving his post on January 31, 1795, Hamilton submitted the Report on a Plan for the Further Support of Public Credit to Congress to curb the debt problem. Hamilton had grown dissatisfied with what he viewed as a lack of a comprehensive plan to fix the public debt. He wished to have new taxes passed, with older ones made permanent, and stated that any surplus from the excise tax on liquor would be pledged to lower the public debt. His proposals were included in a bill by Congress within slightly over a month after his departure as treasury secretary. Some months later, Hamilton resumed his law practice in New York to remain closer to his family. After Jay resigned as Chief Justice in June 1795 to become Governor of New York, Attorney General William Bradford implored Hamilton to take the position, but Hamilton declined in order to focus on New York state politics.
Hamilton's vision was challenged by Virginia agrarians Thomas Jefferson and James Madison, who formed the Democratic-Republican Party. They favored strong state governments based in rural America and protected by state militias, as opposed to a strong national government supported by a national army and navy. They denounced Hamilton as insufficiently devoted to republicanism, too friendly toward corrupt Britain and the monarchy in general, and too oriented toward cities, industry, and banking. The two-party system began to emerge as political parties coalesced around competing interests. A congressional caucus, led by Madison, Jefferson, and William Branch Giles, began as an opposition group to Hamilton's financial programs. Hamilton and his allies began to call themselves the Federalists. Hamilton assembled a nationwide coalition to garner support for the administration, including the expansive financial programs he had made administration policy and especially the president's policy of neutrality in the European war between Britain and France. Hamilton publicly denounced French minister Genêt, who commissioned American privateers and recruited Americans for private militias to attack British ships and colonial possessions of British allies. Eventually, even Jefferson joined Hamilton in seeking Genêt's recall. If Hamilton's administrative republic were to succeed, Americans had to see themselves first as citizens of a nation and experience an administration that proved firm and demonstrated the concepts found within the Constitution. The Federalists did impose some internal direct taxes, but they set aside most implications of Hamilton's administrative republic as too risky. The Republicans opposed banks and cities and favored the series of unstable revolutionary governments in France. They built their own national coalition to oppose the Federalists. Both sides gained the support of local political factions, and each side developed its own partisan newspapers. Noah Webster, John Fenno, and William Cobbett were energetic editors for the Federalists, while Benjamin Franklin Bache and Philip Freneau were fiery Republican editors. All of their newspapers were characterized by intense personal attacks, major exaggerations, and invented claims. In 1801, Hamilton established a daily newspaper, the New York Evening Post, and brought in William Coleman as its editor. Hamilton's and Jefferson's incompatibility was heightened by the unavowed wish of each to be Washington's principal and most trusted advisor. An additional partisan irritant to Hamilton was the 1791 United States Senate election in New York, which resulted in the election of Democratic-Republican candidate Aaron Burr over Federalist candidate Philip Schuyler, the incumbent and Hamilton's father-in-law. Hamilton blamed Burr personally for this outcome, and negative characterizations of Burr began to appear in his correspondence thereafter. The two men did work together from time to time on various projects, including Hamilton's army of 1798 and the Manhattan Water Company. 1796 presidential election Hamilton's resignation as secretary of the treasury in 1795 did not remove him from public life. With the resumption of his law practice, he remained close to Washington as an advisor and friend.
Hamilton influenced Washington in the composition of his farewell address by writing drafts for Washington to compare with the latter's own draft; when Washington had contemplated retirement in 1792, he had consulted Madison for a draft that was used in a similar manner to Hamilton's. In the election of 1796, under the Constitution as it stood then, each of the presidential electors had two votes, which they were to cast for different men from different states. The one who received the most votes would become president; the second-most, vice president. This system was not designed with the operation of parties in mind, as they had been thought disreputable and factious. The Federalists planned to deal with this by having all their electors vote for John Adams, then vice president, and all but a few for Thomas Pinckney. Adams resented Hamilton's influence with Washington and considered him overambitious and scandalous in his private life; Hamilton compared Adams unfavorably with Washington and thought him too emotionally unstable to be president. Hamilton took the election as an opportunity: he urged all the northern electors to vote for Adams and Pinckney, lest Jefferson get in; but he cooperated with Edward Rutledge to have South Carolina's electors vote for Jefferson and Pinckney. If all this worked, Pinckney would have more votes than Adams, Pinckney would become president, and Adams would remain vice president; but it did not work. The Federalists found out about it, and northern Federalists voted for Adams but not for Pinckney, in sufficient numbers that Pinckney came in third and Jefferson became vice president. Adams resented the intrigue, since he felt his service to the nation was much more extensive than Pinckney's. Reynolds affair In summer 1797, Hamilton became the first major American politician publicly involved in a sex scandal. Six years earlier, in summer 1791, 34-year-old Hamilton became involved in an affair with 23-year-old Maria Reynolds. According to Hamilton, Maria approached him at his house in Philadelphia, claiming that her husband James Reynolds was abusive and had abandoned her, and that she wished to return to her relatives in New York but lacked the means.: 366–369 Hamilton recorded her address and subsequently delivered $30 personally to her boarding house, where she led him into her bedroom and "Some conversation ensued from which it was quickly apparent that other than pecuniary consolation would be acceptable". The two began an intermittent illicit affair that lasted until approximately June 1792. James Reynolds was aware of his wife's infidelity and likely orchestrated it from the beginning. He repeatedly encouraged the relationship in order to extort regular blackmail payments from Hamilton. The common practice of the day for men of equal social standing was for the wronged husband to seek retribution in a duel. But Reynolds, being of a lower social status and realizing how much Hamilton had to lose if his activity were made public, resorted to extortion. After an initial request of $1,000, with which Hamilton complied, Reynolds invited Hamilton to renew his visits to his wife "as a friend", only to extort forced "loans" after each visit, which Maria, most likely in collusion, solicited with her letters.
In the end, the blackmail payments totaled over $1,300, including the initial extortion.: 369 Hamilton may by this point have been aware that both spouses were involved in the blackmail, and he welcomed and strictly complied with James Reynolds' eventual request to end the affair. In November 1792, James Reynolds and his associate Jacob Clingman were arrested for counterfeiting and for speculating in Revolutionary War veterans' unpaid back wages. Clingman was released on bail and relayed information to Democratic-Republican congressman James Monroe that Reynolds had evidence incriminating Hamilton in illicit activity as treasury secretary. Monroe consulted with congressmen Muhlenberg and Venable on what actions to take, and the congressmen confronted Hamilton on December 15, 1792. Hamilton refuted the suspicions of financial speculation by exposing his affair with Maria and producing as evidence the letters from both of the Reynoldses, proving that his payments to James Reynolds related to blackmail over his adultery and not to treasury misconduct. The trio agreed on their honor to keep the documents private in the utmost confidence.: 366–369 Five years later, however, in the summer of 1797, the "notoriously scurrilous" journalist James T. Callender published A History of the United States for the Year 1796.: 334 The pamphlet contained accusations, based on documents from the confrontation of December 15, 1792, taken out of context, that James Reynolds had been an agent of Hamilton. On July 5, 1797, Hamilton wrote to Monroe, Muhlenberg, and Venable, asking them to confirm that there was nothing that would damage the perception of his integrity while secretary of the treasury. All but Monroe complied with Hamilton's request. This led to Hamilton and Monroe engaging in an argument that almost culminated in a duel, before the conflict was averted by Aaron Burr. Hamilton then published a 100-page booklet, later usually referred to as the Reynolds Pamphlet, and discussed the affair in detail that was indelicate for the time. Hamilton's wife Elizabeth eventually forgave him, but never forgave Monroe. Although Hamilton faced ridicule from the Democratic-Republican faction, he maintained his availability for public service.: 334–336 Quasi-War During the military build-up preceding the Quasi-War with France, and with the strong endorsement of Washington, Adams reluctantly appointed Hamilton a major general of the army. At Washington's insistence, Hamilton was made the senior major general, prompting Henry Knox, who had served as United States secretary of war and years earlier in wartime as a Continental Army major general, to decline the appointment to serve as Hamilton's junior, believing it would be degrading to rank beneath him. Hamilton served as inspector general of the United States Army from July 18, 1798, to June 15, 1800. Because Washington was unwilling to leave Mount Vernon unless it were to command an army in the field, Hamilton was the de facto head of the army, to Adams's considerable displeasure. If full-scale war broke out with France, Hamilton argued, the army should conquer the North American colonies of France's ally, Spain, which bordered the United States. Hamilton was prepared to march the army through the Southern United States if necessary. To fund the army, Hamilton wrote regularly to Oliver Wolcott Jr., his successor at the treasury, Representative William Loughton Smith, and U.S. senator Theodore Sedgwick. He urged them to pass a direct tax to fund the war.
Smith resigned in July 1797, with Hamilton complaining to him about slowness, and Hamilton urged Wolcott to tax houses instead of land. The eventual program included taxes on land, houses, and slaves, calculated at different rates in different states and requiring assessment of houses, as well as a stamp act like that of the British before the Revolution, though this time Americans were taxing themselves through their own representatives. It nevertheless provoked resistance in southeastern Pennsylvania, led primarily by men such as John Fries, who had marched with Washington against the Whiskey Rebellion. Hamilton aided in all areas of the army's development, and after Washington's death he was by default the senior officer of the United States Army from December 14, 1799, to June 15, 1800. The army was to guard against invasion from France. Adams, however, derailed all plans for war by opening negotiations with France that led to peace. There was no longer a direct threat for the army Hamilton was commanding to respond to. Adams discovered that key members of his cabinet, namely Secretary of State Timothy Pickering and Secretary of War James McHenry, were more loyal to Hamilton than to himself; Adams fired them in May 1800. 1800 presidential election By November 1799, the Alien and Sedition Acts had left one Democratic-Republican newspaper functioning in New York City. When that last newspaper, the New Daily Advertiser, reprinted an article saying that Hamilton had attempted to purchase the Philadelphia Aurora to close it down, and that the purchase could have been funded by "British secret service money", Hamilton urged the New York attorney general to prosecute the publisher for seditious libel, and the prosecution compelled the owner to close the paper. In the 1800 presidential election, Hamilton worked to defeat both the Democratic-Republicans and his party's own nominee, John Adams.: 392–399 Aaron Burr had won New York for Jefferson in May via the New York City legislative elections, as the legislature was to choose New York's electors; now Hamilton proposed a direct election, with carefully drawn districts in which each district's voters would choose an elector, such that the Federalists would split the electoral vote of New York. Jay, who had resigned from the Supreme Court to become Governor of New York, wrote on the back of the letter, "Proposing a measure for party purposes which it would not become me to adopt," and declined to reply. Adams was running this time with Charles Cotesworth Pinckney, the elder brother of former vice presidential candidate Thomas. Hamilton toured New England, again urging northern electors to hold firm for Pinckney in the renewed hope of making Pinckney president, and he again intrigued in South Carolina.: 350–351 Hamilton's ideas involved coaxing middle-state Federalists to assert their non-support for Adams if there was no support for Pinckney, and writing to the more moderate supporters of Adams concerning his supposed misconduct while president.: 350–351 Hamilton expected the southern states, such as the Carolinas, to cast their votes for Pinckney and Jefferson, which would put the former ahead of both Adams and Jefferson.: 394–395 In accordance with these plans, and following a recent personal rift with Adams,: 351 Hamilton wrote a pamphlet called Letter from Alexander Hamilton, Concerning the Public Conduct and Character of John Adams, Esq.
President of the United States that was highly critical of him, though it closed with a tepid endorsement.: 396 Jefferson defeated Adams, but both he and Aaron Burr received 73 votes in the Electoral College. With Jefferson and Burr tied, the U.S. House of Representatives, under electoral laws of the time, had to choose between the two candidates.: 352 : 399 Several Federalists who opposed Jefferson supported Burr, and for the first 35 ballots, Jefferson was denied a majority. Before the 36th ballot, Hamilton threw his weight behind Jefferson, supporting the arrangement reached by James A. Bayard of Delaware, in which five Federalist representatives from Maryland and Vermont abstained from voting, allowing those states' delegations to go for Jefferson, ending the impasse and electing Jefferson president rather than Burr.: 350–351 Even though Hamilton disliked Jefferson and disagreed with him on many issues, he viewed Jefferson as the lesser of two evils. Hamilton spoke of Jefferson as being "by far not so dangerous a man" and of Burr as a "mischievous enemy" to the principal measure of the past administration. It was for that reason, along with the fact that Burr was a northerner and not a Virginian, that many Federalist representatives voted for him.[contradictory] Hamilton wrote many letters to friends in Congress to convince the members to see otherwise.: 352 : 401 In the end, Burr became vice president after losing to Jefferson. However, according to several historians, the Federalists had rejected Hamilton's diatribe as a reason not to vote for Burr.: 353 : 401 In his book American Machiavelli: Alexander Hamilton and the Origins of US Foreign Policy, historian John Lamberton Harper states that Hamilton could have "perhaps" contributed "to a degree" to Burr's defeat. Ron Chernow, alternatively, claims that Hamilton "squelched" Burr's chance at becoming president. When it became clear that Jefferson had developed his own concerns about Burr and would not support his return to the vice presidency, Burr sought the New York governorship in 1804 with Federalist support, against the Jeffersonian Morgan Lewis, but was defeated by forces including Hamilton. Duel with Burr and death Soon after Lewis's gubernatorial victory, the Albany Register published Charles D. Cooper's letters, citing Hamilton's opposition to Burr and alleging that Hamilton had expressed "a still more despicable opinion" of the vice president at an upstate New York dinner party. Cooper claimed that the letter had been intercepted after relaying the information, but stated he was "unusually cautious" in recollecting the information from the dinner. Sensing an attack on his honor, and recovering from his defeat, Burr demanded an apology in the form of a letter. Hamilton wrote a letter in response but ultimately refused to apologize, because he could not recall the instance of insulting Burr. Hamilton was also accused of disavowing Cooper's letter out of cowardice.: 423–424 After a series of attempts to reconcile the differences between the two failed, a duel was arranged through liaisons on June 27, 1804.: 426 The concept of honor was fundamental to Hamilton's vision of himself and of the nation. As evidence of the importance that honor held in Hamilton's value system, historians observe that Hamilton had previously been a party to seven "affairs of honor" as a principal, and to three as an advisor or second. Such affairs of honor were often concluded prior to reaching the final stage of a duel.
Before the duel, Hamilton wrote an explanation of his decision to participate while at the same time intending to "throw away" his shot. His desire to remain available for future political matters also played a part. A week before the duel, Hamilton and Burr both attended an annual Independence Day dinner held by the Society of the Cincinnati. Separate accounts confirm that Hamilton was uncharacteristically effusive, while Burr was, by contrast, uncharacteristically withdrawn. Accounts also agree that Burr became roused when Hamilton, again uncharacteristically, sang a favorite song, which recent scholarship indicates was "How Stands the Glass Around", an anthem sung by military troops about fighting and dying in war. The duel began at dawn on July 11, 1804, along the west bank of the Hudson River on a rocky ledge in Weehawken, New Jersey. The two opponents were rowed over separately from different locations in Manhattan, since the site of the duel was not accessible from the west due to the steepness of the adjoining cliffs. Coincidentally, the duel took place relatively close to the location of the duel that had ended the life of Hamilton's eldest son, Philip Hamilton, three years earlier. Lots were cast for the choice of position and for which second should start the duel. Both were won by Hamilton's second, who chose the upper edge of the ledge for Hamilton, facing the city and the rising sun to the east. After the seconds measured the paces, Hamilton, according to both William P. Van Ness and Burr, raised his pistol "as if to try the light" and had to put on his glasses to prevent his vision from being obscured. Hamilton also refused the more sensitive hairspring setting for the dueling pistols offered by Nathaniel Pendleton, and Burr was unaware of the option. Burr shot Hamilton, delivering what proved to be a fatal wound, while Hamilton apparently "deloped", as he had indicated was his intention in his letter beforehand; his shot went well above Burr's head, breaking a tree branch. The seconds, Pendleton and Van Ness, disagreed on which man fired first in the duel. Soon after, they measured and triangulated the shooting, but could not determine from which angle Hamilton had fired. Biographer Ron Chernow contends that, after taking deliberate aim, Burr fired second. Biographer James Earnest Cooke, however, believes that Burr took careful aim and shot first, and that Hamilton fired while falling after being struck by Burr's bullet. The shot hit Hamilton in the lower abdomen above his right hip. The ball ricocheted off Hamilton's second or third false rib, fracturing it and causing considerable damage to his internal organs, particularly his liver and diaphragm, before becoming lodged in his first or second lumbar vertebra.: 429 The paralyzed Hamilton was immediately attended by the same surgeon who had tended his son Philip. Hamilton was ferried to Greenwich Village and the boarding house of his friend William Bayard Jr., who was waiting on the dock. On his deathbed, Hamilton asked the Episcopal Bishop of New York, Benjamin Moore, to give him holy communion. Moore initially declined to do so on the grounds that participating in a duel was a mortal sin and that Hamilton, although undoubtedly sincere in his faith, was not a member of the Episcopalian denomination. After leaving, Moore was persuaded to return that afternoon by the urgent pleas of Hamilton's friends. After hearing Hamilton's solemn assurance that he repented his role in the duel, Moore gave him communion.
After final visits from his family and friends, and after at least 31 hours of considerable suffering, Hamilton died at two o'clock the following afternoon, July 12, 1804, at Bayard's home just below present-day Gansevoort Street in Greenwich Village, New York City. The city fathers halted all business at noon two days later for Hamilton's funeral. The procession route of about two miles, organized by the Society of the Cincinnati, had so many participants of every class of citizen that it took hours to complete, and it was widely reported by newspapers nationwide. Moore conducted Hamilton's funeral service at Trinity Church at present-day 89 Broadway in Manhattan. Gouverneur Morris gave the eulogy and secretly established a fund to support Hamilton's widow and children. Hamilton was buried in the church's cemetery. Religion As a youth in the West Indies, Hamilton was an orthodox and conventional Presbyterian of the New Lights; he was mentored there by a former student of John Witherspoon, a moderate of the New School. He wrote two or three hymns, which were published in the local newspaper. Robert Troup, his college roommate, noted that Hamilton was "in the habit of praying on his knees night and morning".: 10 During the American Revolution, however, Hamilton became less religious and instead became "a conventional liberal with theistic inclinations who was an irregular churchgoer at best," according to Brown University historian Gordon S. Wood. In his final years, though, Hamilton returned to his Protestant faith as an Episcopalian. Historian Ron Chernow wrote: [H]e was not clearly affiliated with the denomination and did not seem to attend church regularly or take communion. Like Adams, Franklin, and Jefferson, Hamilton had probably fallen under the sway of deism, which sought to substitute reason for revelation and dropped the notion of an active God who intervened in human affairs. At the same time, he never doubted God's existence, embracing Christianity as a system of morality and cosmic justice. When the Constitutional Convention opened in Philadelphia in May 1787, stories circulated that Hamilton made two quips about God at the convention. Asked by a Presbyterian minister why God was not referenced in the Constitution, Hamilton responded, "Indeed, Doctor, we forgot it." When Benjamin Franklin asked that each session of the Constitutional Convention be opened with prayer, Hamilton is reported to have replied that there was no need for "foreign aid". During the French Revolution, Hamilton displayed a utilitarian approach to using religion for political ends, including maligning Thomas Jefferson as "the atheist" and insisting that Christianity and Jeffersonian democracy were incompatible.: 316 After 1801, Hamilton expressed his belief in Christianity, proposing a Christian Constitutional Society in 1802 to take hold of "some strong feeling of the mind" to elect "fit men" to office, and advocating "Christian welfare societies" for the poor. After being shot in his duel with Aaron Burr on July 11, 1804, Hamilton spoke of his belief in God's mercy.[c] On his deathbed, Hamilton asked the Episcopal Bishop of New York, Benjamin Moore, to give him holy communion. Moore initially declined to do so, on two grounds: that to participate in a duel was a mortal sin, and that Hamilton, although undoubtedly sincere in his faith, was not a member of the Episcopalian denomination. After leaving, Moore was persuaded to return that afternoon following the urgent pleas of Hamilton's friends.
After receiving Hamilton's solemn assurance that he never intended to shoot Burr and repented for his part in the duel, Moore gave him communion. Bishop Moore returned the next morning, stayed with Hamilton for several hours until his death, and conducted his subsequent funeral service at Trinity Church. Hamilton's birthplace had a large Jewish community, comprising roughly a quarter of Charlestown's white population by the 1720s. He came into contact with Jews on a regular basis; as a small boy, he was tutored by a Jewish schoolmistress and had learned to recite the Ten Commandments in the original Hebrew. Hamilton exhibited a respect for Jews, which has been described as "a lifelong reverence." He believed that Jewish achievement was a result of divine providence: The state and progress of the Jews, from their earliest history to the present time, has been so entirely out of the ordinary course of human affairs, is it not then a fair conclusion, that the cause also is an extraordinary one—in other words, that it is the effect of some great providential plan? The man who will draw this conclusion, will look for the solution in the Bible. He who will not draw it ought to give us another fair solution. Based primarily on the phonetic similarity of Lavien to a common Jewish surname, it has been suggested that Johann Lavien, the first husband of Hamilton's mother, was Jewish or of Jewish descent. On this contested foundation, it was rumored that Hamilton himself was born Jewish, a claim that gained some popularity early in the 20th century and was given serious consideration in 2021 by Andrew Porwancher. The belief that Lavien was Jewish was popularized by Gertrude Atherton in her 1902 novel The Conqueror, a fictionalized biography of Hamilton that made the earliest known written assertion that Hamilton was Jewish. The consensus of mainstream scholars and historians, however, is that Hamilton was not Jewish. Legacy Hamilton's interpretations of the Constitution, which are set forth in The Federalist Papers, remain highly influential and continue to be cited in scholarly studies and court decisions. Although the Constitution was ambiguous as to the exact balance of power between national and state governments, Hamilton consistently took the side of greater federal power at the expense of the states, which placed him at odds with Thomas Jefferson and other Founding Fathers. Jefferson especially opposed Hamilton's support of a de facto central bank, which Hamilton believed was permissible under Congress's constitutional authority to issue currency, regulate interstate commerce, and do anything else that would be "necessary and proper" to enact the provisions of the Constitution. Jefferson, however, took a differing view. Parsing the text carefully, Jefferson argued that no specific authorization for the establishment of a national bank existed. The controversy between the two was addressed in McCulloch v. Maryland, which largely adopted Hamilton's view, granting the federal government broad freedom to select the best means to execute its constitutionally enumerated powers and confirming the doctrine of implied powers. The American Civil War and the Progressive Era, Hamilton's defenders argue, demonstrated the sorts of crises and politics that Hamilton's administrative republic sought to avoid.[how?] Hamilton's policies have proven greatly influential on the development of the U.S. government.
His constitutional interpretation, particularly of the Necessary and Proper Clause, set precedents for federal authority that courts still cite and that remain an influential guide to reading the Constitution. French diplomat Charles Maurice de Talleyrand-Périgord, who spent 1794 in the United States, wrote, "I consider Napoleon, Fox, and Hamilton the three greatest men of our epoch, and if I were forced to decide between the three, I would give without hesitation the first place to Hamilton," adding that Hamilton understood the problems of European conservatives trying to adapt to a liberalizing world. Both John Adams and Jefferson, however, viewed Hamilton as unprincipled and dangerously aristocratic. Hamilton's reputation was mostly negative in the eras of Jeffersonian democracy and Jacksonian democracy. During the Jeffersonian era, Hamilton was criticized as a centralizer, sometimes to the point of being accused of advocating monarchy. Conversely, during the later Progressive Era, such figures as Herbert Croly, Henry Cabot Lodge, and Theodore Roosevelt praised Hamilton's leadership as a proponent of a strong national government. In the 19th and 20th centuries, several Republicans wrote laudatory biographies of Hamilton prior to entering politics. According to Princeton University historian Sean Wilentz, Hamilton has been generally viewed favorably among contemporary scholars, who portray him as a visionary architect of a modern liberal capitalist economy and of a dynamic federal government headed by an energetic executive. Conversely, these modern scholars favoring Hamilton portray Jefferson and his allies as relatively naïve and dreamy idealists. Hamilton is not known to have ever owned slaves, although members of his family did. At the time of her death, Hamilton's mother owned two slaves and wrote a will leaving them to her sons. Due to their illegitimacy, however, Hamilton and his brother were held ineligible to inherit her property and never took ownership of the slaves.: 17 As a youth in Saint Croix, Hamilton worked for a company that traded slaves as well as sugar and other staples of the transatlantic economy.: 17 Historians have discussed whether Hamilton personally owned slaves later in life. Ron Chernow, in his 2004 biography of Hamilton, argued that, while there is "no definite proof" that Hamilton personally owned slaves, "oblique hints" in Hamilton's papers suggest "he and Eliza may have owned one or two household slaves." Hamilton handled slave transactions as the legal representative of his own family members, and his grandson, Allan McLane Hamilton, interpreted some of these journal entries as purchases Hamilton made for himself. In 1840, however, his son John maintained that his father "never owned a slave; but on the contrary, having learned that a domestic whom he had hired was about to be sold by her master, he immediately purchased her freedom." Hamilton expressed support for limited emancipation during the American Revolutionary War, when he endorsed a plan to recruit enslaved men to serve in the Continental Army. As a necessary inducement, Hamilton wrote, the Black soldiers should be promised their freedom upon enlistment. He dismissed objections that enslaved men were "too stupid" to fight well, arguing that their "want of cultivation" and "habit of subordination" made them ideal soldiers.
Whereas officers should be "men of sense and sentiment," good enlisted men were unthinking "machines," a role to which white men, unaccustomed to a "life of servitude," were comparatively less suited than Blacks. In 1785, he joined his close associate John Jay and more than 30 fellow New Yorkers in founding the New York Manumission Society. The Society lobbied successfully for legislation to gradually abolish slavery in New York. Rather than legally emancipating all enslaved people in the state, the 1799 act declared all children born after July 4, 1799, free, pending a period of apprenticeship lasting 28 years for men and 25 years for women. Enslaved people born prior to that date were not emancipated, and the final end of slavery in New York did not come until 1827. In his letter recommending the enlistment of Black soldiers in the Continental Army, Hamilton rejected the racial essentialism found in the contemporaneous writings of Jefferson and other leading white intellectuals, asserting "their natural faculties are as good as ours." He never advocated for the colonization of free people of color outside the United States, which many contemporaries considered essential to any plan for emancipation.: 22 In the 1790s, Hamilton's political agenda sometimes came into conflict with proslavery interests. When the enslaved population of Saint-Domingue rose up against their French enslavers, Hamilton and other Federalists supported the revolutionaries and urged closer economic and diplomatic ties with the new nation of Haiti.: 23 His suggestions shaped the Haitian constitution, promulgated the year after his death.: 23 At other times, political expediency led Hamilton to form close relationships with slaveholders like William Loughton Smith, whose support was critical to the strength of the Federalist Party in South Carolina. Hamilton has been portrayed as the patron saint of the American School economic philosophy that, according to historian Michael Lind, dominated American economic policy after 1861. Hamilton's ideas and work influenced the 19th-century German economist Friedrich List and Henry Charles Carey, who served as Abraham Lincoln's chief economic advisor during the Lincoln administration. In fall 1781, Hamilton firmly supported government intervention in favor of business after the manner of Jean-Baptiste Colbert. In contrast to the British policy of international mercantilism, which he believed skewed benefits to colonial and imperial powers, Hamilton was a pioneering advocate of protectionism. He is credited with the idea that industrialization was only possible with tariffs that protected the "infant industries" of an emerging nation. Political theorists credit Hamilton with the creation of the modern administrative state, citing his arguments in favor of a strong executive, linked to the electoral support of the people, as the linchpin of an administrative republic. The dominance of executive leadership in the formulation and carrying out of policy was, in Hamilton's view, essential to resist the deterioration of republican government. As evidence of Hamilton's global influence, some scholars have compared his recommendations to the development of Meiji Japan. In popular culture Hamilton has appeared as a significant figure in popular works of historical fiction, including many that focused on other American political figures of his time.
In comparison to other Founding Fathers, however, Hamilton attracted relatively little attention in American popular culture in the 20th century.[original research?] In 2015, he gained significant mainstream attention with the debut of the Broadway musical Hamilton, which is based on a biography by Ron Chernow. Lin-Manuel Miranda played Hamilton in the original Broadway cast. The musical was described by The New Yorker in February 2015 as "an achievement of historical and cultural reimagining. In Miranda's telling, the headlong rise of one self-made immigrant becomes the story of America." The Off-Broadway production of Hamilton won the 2015 Drama Desk Award for Outstanding Musical and seven other Drama Desk Awards. In 2016, Hamilton received the Pulitzer Prize for Drama and set a record for Tony Award nominations with 16, winning 11, including Best Musical. During the Obama administration, a plan to replace Hamilton on the ten-dollar bill was shelved, due partly to the musical's popularity. On July 3, 2020, Disney+ released the movie Hamilton, an authorized film of the Broadway stage production performed by the original cast. Alexander Hamilton is mentioned several times in the American adaptation of Ghosts by the main character, Isaac Higgintoot, who has a troubled history with and a strong dislike for Hamilton. Isaac's backstory with Hamilton, and why he hates him so much, is finally revealed in the fourteenth episode of the fourth season; Isaac learns of Hamilton's death earlier in the series.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Comedy_music] | [TOKENS: 4725] |
Contents Comedy music Comedy music or musical comedy is a genre of music that is comedic in nature. Its history can be traced back to the first century in ancient Greece and Rome, moving forward in time to the Medieval Period, the Classical and Romantic eras, and the 20th century. Various forms of comedic musical theatre, including "musical play", "musical comedy", "operetta" and "light opera", evolved from the comic operas first developed in late 17th-century Italy. Popular music artists in the 20th century interested in comedy include Allan Sherman, Frank Zappa, Tiny Tim, Barenaked Ladies, Randy Newman, and "Weird Al" Yankovic. Artists in the 21st century include Tenacious D, Flight of the Conchords, The Lonely Island, Ninja Sex Party and The Axis of Awesome. Comedy music is often associated with counterculture, due to the subversive messages it displays. The informative nature of comedy music also contributes to the improvement of learning inside and outside the classroom. Forms of entertainment like musical theatre often incorporate comedy music as well. To create comic effects in music, composers have developed several principal compositional techniques, including the use of comic text, musical parody, and unexpected juxtapositions of syntactical elements, among others. Comedy music can be further categorized into several types, such as parody music, novelty song, comedy rock, and comedy hip hop. Awards dedicated to comedy music include the Grammy Award for Best Comedy Album, the Golden Globe Award for Best Motion Picture – Musical or Comedy, and the Musical Comedy Awards. Comedy-music relationship Comedy is a form of art that addresses comic or humorous situations, or even serious ones with a light or satirical approach. Music is also a form of art, and it is concerned with the rhythm, melody, and harmony of vocal, instrumental, or mechanical sounds. One similarity between comedy and music is the way they both evoke psychological and emotional effects in their listeners, without the listeners fully understanding the specific reason for their feelings of hilarity. Comedy in entertainment is also established when musical codes set up and confirm the audience's understanding of the symbolic meaning of a scene, before subverting that understanding to play with the audience's response. Thus, a multi-faceted musical experience has the ability to elicit emotions such as humor in its listeners. This type of musical experience can be identified as comedy music. History The first uses of comedy in music can be traced back to the first century in ancient Greece and Rome, where poets and playwrights entertained with puns and wordplay. The origins of comedy plays in ancient Greece are first recorded on pottery from the 6th century BCE, on which actors dressed as horses, satyrs, and dancers in exaggerated costumes are illustrated. Another early origin is the explicitly sexually humorous poetry of Hipponax in the 6th century BCE and Archilochus in the 7th century BCE. A third origin is the phallic songs sung during Dionysiac festivals, as mentioned by Aristotle. Playwrights of comedic theatre include Aristophanes and Menander, whose works mocked politicians, philosophers, and fellow artists. In the Medieval Period, minstrels, troubadours and court jesters would continue performing comedic music, some of it satirical, accompanied by musical instruments.
Court jesters in particular would display their wit and humor through songs, jokes, and physical comedy as a way to offer critique of society and authority, working in public squares or officially hired as licensed fools to work directly under the king or queen. Forms of comic opera first developed in late 17th-century Italy, leading to the emergence of opera buffa as an alternative to opera seria. It quickly made its way to France, where it became opéra comique, and eventually, in the following century, French operetta, with Jacques Offenbach as its most accomplished practitioner. Many countries developed their own genres of comic opera, incorporating the Italian and French models along with their own musical traditions. Examples include German singspiel, Viennese operetta, Spanish zarzuela, Russian comic opera, English ballad and Savoy opera, North American operetta and musical comedy. In the Classical and Romantic eras, composers like Haydn, Beethoven, and Schumann would place comic passages side by side with more serious sections to bring out the contrast between them. This technique is called juxtaposition, which is a basic element of comedy. Haydn's Symphony No. 45 of 1772 (the Farewell Symphony) and his Symphony No. 94 of 1792 (the Surprise Symphony) are the most famous examples. A tradition of toy symphonies – featuring toy musical instruments – began in the Classical era and continued into the 19th century and beyond. Progress in comedy music continued over the years, until vaudeville entertainers of the early 20th century added lyrics to musical numbers. In 1923, one of the first comedy music hits, 'Yes! We Have No Bananas', sung by Eddie Cantor, was released. In 1924 Billy Rose asked, "Does the Spearmint Lose Its Flavor on the Bedpost Overnight?". In 1958 the song was rereleased as "Does Your Chewing Gum Lose Its Flavour (On the Bedpost Overnight?)" by Lonnie Donegan, the King of Skiffle. In the 1940s, Spike Jones created songs with a comedy technique of replacing several musical notes with humorous sound effects. This was followed in 1951 by Stan Freberg, who released a series of cover songs that addressed the issue of commercialism in that age. In the 1950s Fritz Spiegl organised a popular series of "April Fools" concerts in Liverpool. The idea was subsequently taken up by Gerard Hoffnung in London at the Royal Festival Hall. The 1956 "Hoffnung Music Festival" played to a sell-out audience in the hall and to BBC viewers throughout Britain. Two more Hoffnung Festivals followed, the second in 1958 and the third in 1961, presented as a tribute after his death. Contributions included Donald Swann's revised version of Haydn's Surprise Symphony to make it considerably more surprising, and Malcolm Arnold's A Grand, Grand Overture, scored for orchestra and three vacuum cleaners (dedicated to US President Hoover). After Hoffnung's death, similar concerts were promoted by his widow Annetta. Malcolm Arnold's Toy Symphony was first performed at a Savoy Hotel fund-raising dinner in London on 28 November 1957, with toy instruments played by a group of eminent composers, musicians and personalities, including Thomas Armstrong, Edric Cundell, Gerard Hoffnung, Eileen Joyce, Steuart Wilson and Leslie Woodgate. On 17 July 1958 the 'Mammoth Concert of Comic Music' was held at the Royal Albert Hall. Pieces performed included a concerto for motor horn and orchestra by Antony Hopkins, Overture: The Masterdrinkers by Spike Hughes, and a concertino for piano tuner and orchestra by Lambert Williamson.
The 1960s and 1970s saw the rise of numerous comedy music artists whose careers went on for decades, including Allan Sherman, Shel Silverstein, Frank Zappa, Tiny Tim, and Randy Newman. In 1970, the radio host Barret Hansen – better known as Dr. Demento – began broadcasting. He played tracks sent in by amateur artists, one of whom was a 16-year-old 'Weird Al' Yankovic. Yankovic released his first album in 1983, which eventually led to a 14-album contract that he did not complete until 2014. For over four decades, he released multiple hit parodies and originals, which made him a major player in the genre of comedy music and the counterculture associated with it. In 1994, The Actors' Gang members Jack Black and Kyle Gass formed the iconic comedy rock duo Tenacious D and went on to release their debut album in 2001. A popular 21st-century musical comedy act is Flight of the Conchords, a New Zealand duo composed of musicians Bret McKenzie and Jemaine Clement, which became the basis of the self-titled BBC radio series (2004) and then the HBO American television series (2007–2009). At the turn of the millennium, the band Steel Panther formed in Los Angeles, with songs, live shows and videos parodying the stereotypical glam metal genre and lifestyle of the 1980s. In 2001, The Lonely Island formed in Berkeley, California with members Akiva Schaffer, Andy Samberg and Jorma Taccone, who starred in a series of SNL Digital Shorts featuring songs like 'Motherlover', 'Dick in a Box', 'I'm on a Boat', 'I Just Had Sex' and more. Through the rest of the 2000s, a movement of comedy rock acts emerged in Australia, with bands such as The Axis of Awesome, The Beards, The Kransky Sisters and Tripod. When musician Matt Farley discovered that the only songs from his band Moes Haven getting any plays were the ones with more comedic titles, he switched his focus to novelty songs in 2008. Since then, Farley has written over 22,000 songs about potty humor, celebrities, food and more under band names like The Toilet Bowl Cleaners, Papa Razzi and the Photogs, The Very Nice Interesting Singer Man and The Hungry Food Band. Taking rock and synth-pop influences in a more comedic direction, the duo Ninja Sex Party formed in 2009 with members Dan "Danny Sexbang" Avidan and Brian "Ninja Brian" Wecht, who went on to record five albums of original material, three cover albums and one re-recording album. Ever since their album Under the Covers, NSP has been backed by the band TWRP. For three albums, they collaborated with animator and internet personality Arin "Egoraptor" Hanson to create the video game-themed side project Starbomb. In 2010, rappers Peter "Nice Peter" Shukoff and Lloyd "Epic Lloyd" Ahlquist created the web series Epic Rap Battles of History, a show that pitted famous figures, both real and fictional, against each other in rap battles. It has run for seven seasons, featuring stars like "Weird Al" Yankovic, Snoop Dogg, T-Pain and more. The beginning of the 2010s saw Nerf Herder frontman Parry Gripp begin to release a long series of successful tween pop songs such as "It's Raining Tacos", "Space Unicorn" and "Do You Like Waffles?", dealing with themes of animals and food and gaining him the nickname "the "Weird Al" Yankovic of YouTube". Associations Counterculture is associated with comedy music due to the respective natures of comedy and music.
Comedy often contains progressive and subversive messages intended to provide listeners with information about issues, injustices, and other topics that are important to the artist. Music has the ability to explain political issues in a way that is easily accessible to a wide range of listeners. Both comedy and music have the power to create movements and spread ideas, allowing them to effectively advocate countercultural causes through the ages, among them the challenge to authority. 'Weird Al' Yankovic spread his message about the privilege of the upper class through his song 'First World Problems': My maid is cleaning the bathroom, so I can't take a shower / When I do, the water starts getting cold after an hour / I couldn't order off the breakfast menu, cause I slept in till two / Then I filled up on bread, didn't leave any room for tiramisu / Oh no, there's a pixel out in the corner of my laptop screen / I don't have any bills in my wallet small enough for the vending machine / Some idiot just called me up on the phone, what!? Don't they know how to text? OMG! / I got first world, first world problems. Kevin Bloody Wilson's song – 'Living Next Door to Alan' – is about an Indigenous family claiming land neighboring the millionaire Alan Bond: They came down from Meekatharra / In a burned-out blue FJ / That farted and just shit itself in Jutland Parade / Right next door to Bondy's / When the smoke had cleared a voice said: / 'Eh .. this place look all right / We'll tell the government it's a sacred site / Dead fuckin' easy' / 'Good day Mr Alan Bond, how you goin' bloke? / Hey, I'm your brand-new neighbour ... hey, mate you got a smoke? / And I think I'm gonna like it here / Livin' next door to Alan'. Comedy and music have both been found to improve the effectiveness of learning inside and outside the classroom. Comedy improves short-term issue recognition and can improve a student's learning by attracting and holding their attention for a longer duration of the class, ensuring their continued motivation and engagement. Music improves a student's vocabulary and comprehension skills, while simultaneously encouraging them to think creatively. An example of the implementation of comedy music in education is the use of parody songs to learn the English language. In the 1920s and 1930s, musical theatre was a form of entertainment that often incorporated comedy. In a musical setting, rhetorico-musical techniques contribute to creating comedic effect; an example is aposiopesis, the device of suddenly breaking off in musical speech for dramatic or emotional effect. Dance – particularly tap dance – is another contributing element. Musical comedies differ from book musicals in that they focus more on comedy and dance than on drama and character development. This era's musical comedies include works created by the brothers George and Ira Gershwin: 'Strike Up the Band', 'Lady, Be Good', 'Oh, Kay!', 'Girl Crazy', and 'Of Thee I Sing', along with the later 'Crazy for You', assembled posthumously from their songs. Principal techniques To create comic effects in music, composers have developed the following principal compositional techniques. The use of comic text or funny words immediately conveys humor. This can be traced back to 13th-century motets, but it was 18th-century opera buffa that first explored deeply all the aspects of verbal comedy. An example of this is Mozart's Le nozze di Figaro, composed in 1786. Musical parodies satirize certain styles or particular works of music.
Examples of this are Mozart's Ein musikalischer Spass, composed in 1787, which parodies the style of incompetent composers, and Siegfried Ochs's variations on 'Kommt ein Vogel geflogen', which model the style of a particular composer in each variation. The use of unexpected juxtapositions of syntactical elements includes changing the lengths of phrases, startling turns of melody and dynamics, and contrasting textures. An example of this is the minuet from Haydn's Symphony No. 104, composed in 1796, where rests and a crescendo of the timpani interrupt the regular flow of the music. Musical description includes animal or even nonsensical sound effects that illustrate certain events or situations within the music piece. Examples of this are the bird calls in Beethoven's Pastoral Symphony, composed in 1808, the bleating of sheep in Strauss's Don Quixote, composed in 1897, and the sound effects that illustrate hunting or market scenes in Medieval Italian caccie. The inclusion of folk or popular music techniques in certain passages creates a humorous effect. Examples of this are the clumsy folk-like dance incorporated in the last movement of Haydn's Symphony No. 82 – nicknamed The Bear – composed in 1786, and Hindemith's use of the shimmy in his Suite 1922 for piano. The use of incongruity creates contrasts between music styles and techniques, with parodistic intent. An example of this is Haydn's Symphony No. 60 – nicknamed Il Distratto – composed in 1774. The use of unusual orchestral devices creates the element of surprise. Examples of this are the retuning of the violins in the last movement of Haydn's Symphony No. 60, composed in 1774, the use of col legno in the last movement of Berlioz's Symphonie fantastique, composed in 1830, and the use of toy instruments in various classical pieces from the 1760s to the 21st century. The descriptive use of music can allude to famous comic characters. Examples of this are Elgar's symphonic poem on Falstaff, composed in 1913, and Strauss's depiction of Till Eulenspiegel, composed in 1895. The use of unusual effects of texture, dynamics, rhythm, and melodic design creates comic features within a piece. Examples of this are the exaggeratedly large intervals of the bass voice in 18th-century opera buffa and the two sopranos showing off their high registers in Mozart's Der Schauspieldirektor, composed in 1785. The use of strange keys and distant modulations respectively creates dissonance and remote harmonic movement; these devices produce a subtle humorous effect. Examples of this are found in Renaissance madrigals and motets and in Baroque cantatas. References to past styles and techniques are presented in a new context, on the assumption that the audience is familiar with the referenced style and technique. An example of this is the referencing of 18th-century forms and instrumentation by the Neoclassical composers Stravinsky and Hindemith in the 20th century. Musical quotations are blended together in vertical and horizontal orders to form a medley. In the Renaissance era, this type of composition was called the quodlibet; in the Romantic era, such medleys were often performed in operas. Examples of this are C. Hopfner's operetta for men's voices – Das Gastspiel der Lucca – composed in 1875, and Charles Ives's Holiday Symphony, composed in 1913. Composers like Haydn and Beethoven often used specific movement titles to identify their work as humorous, labeling movements 'scherzo', which means 'joke'.
An example of this is the scherzo from Tchaikovsky's Symphony No. 4, composed in 1878. Tempo modifications not only set the pace of the music but also imply mood and style. An example of this is found in Haydn's symphony finales of the late 18th century, where tempo modifications are used to display character. The use of visually uncommon notation was employed in the complex polyphony of the late 14th century, the puzzle canons of the Renaissance and Baroque eras, and the aleatoric music of the 20th century. An example of this is Baude Cordier's heart-shaped manuscript of 'Belle bonne', composed in the late 14th century. The use of specific terms in genre designations identifies certain types of music as humorous. Obvious designations include opera buffa, while subtler ones include terms like canzonetta, chansonetta, and operetta. An example of this is Schumann's use of the term 'humoresque' to designate humorous music, as demonstrated in his own work – Humoreske – composed in 1838. Composers also make fun of certain performance styles through the use of parody. Examples of this are Victor Borge, who made fun of conventional classical music by mimicking well-known pieces, and Anna Russell, who satirized Wagner. Satiric texts are incorporated within instrumental works to convey humor. An example of this is a vocal arrangement of Mozart's overture to Die Zauberflöte that begins with "Vivat Carl Maria Weber". The use of chance to combine phrases in musical composition is known as ars combinatoria. In the 20th century, this genre is called aleatoric music or chance music. An example of this is John Cage's Music of Changes, composed in 1951. Soggetto cavato is a technique that substitutes solmization syllables for letters, creating a musical cryptogram. An example of this is the use of the letters ASCH and SCHA in Schumann's Carnaval, composed in 1835. Types Parody music is a subgenre of comedy music that incorporates comic or satirical features and is a reinterpretation of the original it is based upon. Bart Baker parodies Nicki Minaj's song 'Anaconda' by replacing the original lyrics with new ones: I'm dry humping bamboo in a jungle / My butt's so big it's like two gigantic bubbles / And I always show it off 'cause it's my greatest asset / But it's enhanced by surgery, yes, it's made out of plastic / It's not real, real, real. Peter Schickele composed and performed music allegedly written by the fictional P. D. Q. Bach, the "only forgotten son" of the Bach family. The novelty song is a subgenre of comedy music that is humorous, unique and original, sounding different from everything else being played in the media. Examples of novelty song artists include Tom Lehrer and Allan Sherman. Comedy rock is a subgenre of comedy music that focuses on dissenting humor, a blend of youthful silliness and rebellious instincts. Stephen Lynch sings about the death of his grandfather in his song 'Grandfather': When Grandfather dies / Life will be strange / When Grandfather dies / My whole world will change / When Grandfather dies / I'll scream and I'll yell / 'Cause I'll be fuckin' rich as hell. Comedy hip hop is a subgenre of comedy music that incorporates humor in the rap lyrics and in the music itself. The Lonely Island released their first comedy hip hop song, 'Ka-Blamo!', in 2001: When you're mining for coal and you forget what coal is / And you're sure to be fired, because that's your job! / When a mole's in your ass and you wonder where the mole is / You're screwed man, a mole is in your ass. Job!
Awards The Grammy Award for Best Comedy Album acknowledges both spoken-word and musical comedy albums. It is presented by the National Academy of Recording Arts and Sciences of the United States and has been awarded since 1959. The Golden Globe Award for Best Motion Picture – Musical or Comedy recognizes musical or comedy films. It is presented by the Hollywood Foreign Press Association of the United States and has been awarded since 1952. The Musical Comedy Awards is an annual competition that acknowledges the United Kingdom's up-and-coming as well as established artists in the musical comedy genre. It was set up in 2008 by founder Ed Chappel.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Firstborn_hypothesis] | [TOKENS: 313] |
Firstborn hypothesis The firstborn hypothesis is a proposed solution to the Fermi paradox which states that no extraterrestrial intelligent life has been discovered because humanity is the first form of intelligent life in the universe. Background There is no reliable or reproducible evidence that aliens have visited Earth. No transmissions or evidence of intelligent extraterrestrial life have been observed anywhere other than Earth in the universe. This runs counter to the knowledge that the universe is filled with a very large number of planets, some of which likely hold conditions hospitable for life. Life typically expands until it fills all available niches. These contradictory facts form the basis for the Fermi paradox, of which the firstborn hypothesis is one proposed solution. Avi Loeb, an astrophysicist and cosmologist, has suggested that Earth may be a very early example of a life-bearing planet and that life-bearing planets may be more common trillions of years from now. He has put forward the view that the universe has only recently reached a state in which life becomes possible, and that this is the reason humanity has not detected extraterrestrial life. Relationship to other proposed Fermi paradox solutions The firstborn hypothesis is a special case of the Hart–Tipler conjecture (the idea that the lack of evidence for interstellar probes is evidence that no intelligent life other than humanity exists in the universe), asserting a time-dependent curve towards discovery. The firstborn hypothesis is also a special time-dependent case of the rare Earth hypothesis, which states that the conditions for creating intelligent life are exceedingly rare.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Hart%E2%80%93Tipler_conjecture] | [TOKENS: 522] |
Hart–Tipler conjecture The Hart–Tipler conjecture is the idea that an absence of detectable von Neumann probes is contrapositive evidence that no intelligent life exists outside of the Solar System. The idea was first proposed in opposition to the Drake equation in a 1975 paper by Michael H. Hart titled "Explanation for the Absence of Extraterrestrials on Earth". Assuming that the probes traveled at 1/10 the speed of light and that no time was lost in building new ships upon arriving at a destination, Hart surmised that a wave of von Neumann probes could cross the galaxy in approximately 650,000 years, a comparatively minimal span of time relative to the estimated age of the universe of 13.7 billion years. Hart's argument was extended by the cosmologist Frank Tipler in his 1981 paper "Extraterrestrial intelligent beings do not exist". Tipler's article prompted a response from Drake, as well as from peers like Gregory Benford and John Daugman. The conjecture is the first of many proposed solutions to the Fermi paradox (the conflict between the lack of obvious evidence for alien life and various high-probability estimates for its existence). In this case, the solution is that there is no other intelligent life because such estimates are incorrect. The conjecture is named after the astrophysicist Michael H. Hart and the mathematical physicist and cosmologist Frank Tipler. Background There is no reliable or reproducible evidence that aliens have visited Earth. No transmissions or evidence of intelligent extraterrestrial life have been detected or observed anywhere other than Earth in the universe. If intelligent life existed elsewhere, it would by now have produced enough self-replicating spacecraft, known as von Neumann probes, to cover the universe. This absence runs counter to the knowledge that the universe is filled with a very large number of planets, some of which likely hold conditions hospitable for life. Life typically expands until it fills all available niches. These contradictory facts form the basis for the Fermi paradox, of which the Hart–Tipler conjecture is one proposed solution. Relationship to other proposed Fermi paradox solutions The firstborn hypothesis is a special case of the Hart–Tipler conjecture which states that no other intelligent life has been discovered because humanity is the first intelligent life in the universe. According to the Berserker hypothesis, the absence of interstellar probes is not evidence of life's absence, since such probes could "go berserk" and destroy other civilizations before self-destructing.
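As a rough, back-of-the-envelope check on Hart's figure (this calculation is an illustration, not taken from Hart's paper; the roughly 100,000-light-year galactic diameter is a standard estimate): a colonization wave expanding at one-tenth the speed of light would need on the order of

$$t = \frac{d}{v} \approx \frac{10^{5}\ \text{ly}}{0.1\,c} = 10^{6}\ \text{years}$$

to cross the Milky Way, the same order of magnitude as Hart's 650,000 years; the lower figure reflects his specific assumptions about the starting point and settlement geometry. Either value is a tiny fraction of the universe's estimated age, since $6.5 \times 10^{5} / 1.37 \times 10^{10} \approx 0.005\%$.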
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Neocatastrophism] | [TOKENS: 396] |
Neocatastrophism Neocatastrophism is the hypothesis that life-exterminating events such as gamma-ray bursts have acted as a galactic regulation mechanism in the Milky Way, suppressing the emergence of complex life in its habitable zone. It is one of several proposed solutions to the Fermi paradox, since it provides a mechanism which would have delayed the advent of intelligent beings in galaxies near Earth. The problem It is estimated that Earth-like planets in the Milky Way started forming 9 billion years ago, and that their median age is 6.4 ± 0.7 Ga. Moreover, 75% of stars in the galactic habitable zone are older than the Sun. This makes it more likely than not that planets bearing evolved intelligent life would be older than Earth (4.54 Ga). This creates an observational dilemma, since even slower-than-lightspeed interstellar travel could in theory take only 5 to 50 million years to colonize the galaxy. This leads to a conundrum first posed in 1950 by the physicist Enrico Fermi in his namesake paradox: "Why are no aliens or their artifacts physically here?" The neocatastrophism resolution The hypothesis posits that astrobiological evolution is subject to regulation mechanisms that arrest or postpone the advent of complex creatures capable of interstellar communication and travel technology. These regulation mechanisms act to temporarily sterilize planets of biology in the galactic habitable zone. The main proposed regulation mechanism is gamma-ray bursts. Part of the neocatastrophism hypothesis is that stellar evolution produces a decreasing frequency of such catastrophic events, increasing the length of the "window" in which intelligent life might arise as galaxies age. According to modeling, this creates the possibility of a phase transition, at which point a galaxy turns from a place that is essentially dead (with a few pockets of simple life) to one that is crowded with complex life forms.
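To put the scale of the dilemma in numbers (an illustrative calculation from the figures above, not one given in the source): a planet of the median age would have a head start over Earth of

$$6.4\ \text{Ga} - 4.54\ \text{Ga} \approx 1.9\ \text{Gyr},$$

which exceeds even the most pessimistic 50-million-year colonization timescale quoted above by a factor of about $1.9 \times 10^{9} / 5 \times 10^{7} \approx 40$. Absent a regulation mechanism, a typical older civilization would therefore have had time to colonize the galaxy dozens of times over.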
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Guatemala] | [TOKENS: 10790] |
History of the Jews in Latin America and the Caribbean The history of the Jews in Latin America and the Caribbean began with conversos who joined the Spanish and Portuguese expeditions to the continents. The Alhambra Decree of 1492 led to the mass conversion of Spain's Jews to Catholicism and the expulsion of those who refused to do so. Many conversos, Jews who converted to Christianity under pressure during the Spanish Inquisition, did travel to the New World. While the Spanish Crown required settlers to be of Catholic lineage, conversos often presented themselves as devout Catholics to meet this requirement. Some sought refuge in the Americas to escape the persecution of the Inquisition, which followed them even to the Spanish viceregal towns. In places like Mexico and New Mexico, conversos maintained their faith in secret while outwardly adhering to Catholic practices. Their migration was driven by both the hope for greater economic opportunities and the desire to escape religious oppression. The first Jews came with the first expedition of Christopher Columbus, including Rodrigo de Triana and Luis de Torres. Throughout the 15th and 16th centuries a number of converso families migrated to the Netherlands, France and eventually Italy, from where they joined other expeditions to the Americas. Others migrated to England or France and accompanied their colonists as traders and merchants. By the late 16th century, fully functioning Jewish communities were founded in the Portuguese colony of Brazil, the Dutch colonies of Suriname and Curaçao, Spanish Santo Domingo, and the English colonies of Jamaica and Barbados. In addition, there were unorganized communities of Jews in Spanish and Portuguese territories where the Inquisition was active, including Colombia, Cuba, Puerto Rico, Mexico and Peru. Many in such communities were crypto-Jews, who had generally concealed their identity from the authorities. By the mid-17th century, the largest Jewish communities in the Western Hemisphere were located in Suriname and Brazil. Several Jewish communities in the Caribbean, Central and South America flourished, particularly in those areas under Dutch and English control, which were more tolerant. More immigrants came to the region as part of the massive emigration of Jews from Eastern Europe in the late 19th century. During and after World War II, many Ashkenazi Jews emigrated to South America for refuge. In the 21st century, fewer than 300,000 Jews live in Latin America. They are concentrated in Argentina, Brazil, Chile, Cuba, Mexico and Uruguay. Argentina Jews fleeing the Inquisition settled in Argentina, where they intermarried with native women. Portuguese traders and smugglers in the Virreinato del Río de la Plata were considered by many to be crypto-Jewish, but no community emerged after Argentina achieved independence. After 1810, and especially around the mid-nineteenth century, more Jews, particularly from France, began to settle in Argentina. By the end of the century, as in the United States, many Jewish immigrants to Argentina were coming from Eastern Europe (mainly Russia and Poland), fleeing Tsarist persecution. Upon arrival they were generally called "Russians" in reference to their region of origin. Jewish individuals and families emigrated from Europe to Argentina before and after World War II, attempting to escape the Holocaust and, later, postwar antisemitism.
Between 250,000 and 300,000 Jews now live in Argentina, the vast majority of whom reside in the cities of Buenos Aires, Rosario, Córdoba, Mendoza, La Plata and San Miguel de Tucumán. Argentina has the third-largest Jewish community in the Americas after the United States and Canada, and the sixth-largest in the world. According to recent surveys, more than a million Argentines have at least one grandparent of Jewish ethnicity. The Jewish Argentine community legally receives seven holidays per year – both days of Rosh Hashanah, Yom Kippur, and the first two and last two days of Passover – under Law 26,089. Bahamas 200 Jews lived in the Bahamas in 2022. Bolivia The Jewish presence in Bolivia started at the beginning of the Spanish colonial period. Santa Cruz de la Sierra was founded in 1557 by Ñuflo de Chávez, who was accompanied by a small group of pioneers, including several crypto-Jews from Asunción and Buenos Aires. The city became known as a safe haven for Jews during the Inquisition in the region. The second wave of conversos came to Santa Cruz de la Sierra after 1570, when the Spanish Inquisition began operating in Lima. Alleged marranos (that is, New Christians whom others rightly or wrongly suspected of crypto-Judaism) settled in Potosí, La Paz and La Plata. After they gained economic success in mining and commerce, they faced suspicion and persecution from the Inquisition and local authorities. Most of these marrano families moved to Santa Cruz de la Sierra, as it was an isolated urban settlement where the Inquisition did not bother the conversos. Most of the converso settlers were men, and many intermarried with indigenous or mestizo women, founding mixed-race or mestizo families. Conversos also settled in the adjacent towns of Vallegrande, Postrervalle, Portachuelo, Terevinto, Pucara, Cotoca and others. Many of Santa Cruz's oldest families are of partial Jewish heritage; some traces of Jewish culture can still be found in family traditions, as well as in local customs. For example, some families have heirloom seven-branched candlesticks or the custom of lighting candles on Friday at sunset. The typical local dishes can all be prepared with kosher practices (none mix milk and meat; pork is served, but never mixed with other foods). Scholars disagree on the provenance and recency of these practices. After almost five centuries, some of the descendants of these families claim awareness of Jewish origins but practice Catholicism (in certain cases with some Jewish syncretism). From independence in 1825 to the end of the 19th century, some Jewish merchants and traders (both Sephardim and Ashkenazim) immigrated to Bolivia. Most took local women as wives, founding families that eventually merged into mainstream Catholic society. This was often the case in the eastern regions of Santa Cruz, Tarija, Beni and Pando, where these merchants came from Brazil or Argentina. During the 20th century, substantial Jewish settlement began in Bolivia. In 1905, a group of Russian Jews, followed by Argentines, settled in Bolivia. In 1917, it was estimated that there were 20 to 25 professing Jews in the country. By 1933, when the Nazi era in Germany started, there were 30 Jewish families. The first large Jewish immigration occurred during the 1930s; the population had climbed to an estimated 8,000 by the end of 1942. During the 1940s, 2,200 Jews emigrated from Bolivia to other countries.
Those who remained created communities in La Paz, Cochabamba, Oruro, Santa Cruz, Sucre, Tarija and Potosí. After World War II, a small number of Polish Jews immigrated to Bolivia. By 2006, approximately 700 Jews remained in Bolivia. There are synagogues in the cities of Santa Cruz de la Sierra, La Paz, and Cochabamba. Most Bolivian Jews live in Santa Cruz de la Sierra. Brazil Jews settled early in Brazil, especially in areas under Dutch rule. They set up a synagogue in Recife in 1636, which is considered the first synagogue in the Americas. Most of these Jews were conversos who had fled Spain and Portugal for the religious freedom of the Netherlands when the Inquisition began in Portugal in 1536. In 1654, following the Portuguese reconquest of Brazil, Jews left for the Caribbean islands and for New Amsterdam under Dutch rule; the latter was taken over by the English in 1664 and renamed New York City. After independence in the 19th century, Brazil attracted more Jews among its immigrants, and pressure in Europe convinced more Jews to leave. Jewish immigration rose throughout the 19th and early 20th centuries, a time of massive emigration from the Russian Empire (including Poland and Ukraine). Jewish immigration to Brazil was rather low between 1881 and 1900, although this was the height of other international immigration to Brazil; many Jews were instead going to more industrialized countries. Between 1921 and 1942, worldwide immigration to Brazil fell by 21%, but Jewish immigration to Brazil increased by 57,000. This was in response to anti-immigration legislation and immigration quotas passed by the United States, Argentina, Canada and South Africa, which persisted even after the crisis of Jews under the Third Reich became clear. The Brazilian government generally did not enforce its own immigration legislation. Lastly, the Jews in Brazil developed strong support structures and economic opportunities, which attracted Eastern European and Polish Jewish immigration. Brazil has the 9th-largest Jewish community in the world, about 107,329 people as of 2010, according to the IBGE census. The Jewish Confederation of Brazil (CONIB) estimates that there are more than 120,000 Jews in Brazil. Brazilian Jews play an active role in politics, sports, academia, trade and industry, and are well integrated in all spheres of Brazilian life. The majority of Brazilian Jews live in the state of São Paulo, but there are also sizable communities in Rio de Janeiro, Rio Grande do Sul, Minas Gerais and Paraná. Chile Although a relatively small community, amounting to no more than 1% of the country's religious minorities, Jews in Chile have achieved prominent positions in its society, playing key roles both before and after its independence in 1810. Most Chilean Jews today reside in Santiago and Valparaíso, but there are significant communities in the north and south of the country. Mario Kreutzberger, otherwise known as "Don Francisco" and host of Sábado Gigante, the longest-running TV show in the world, is a Chilean Jew of German origin. Other Chilean Jews who have achieved recognition in arts and culture include Alejandro Jodorowsky, now established in France and best known internationally for his literary and filmic work, as well as the actors Nissim Sharim, Shlomit Baytelman and Anita Klesky. Volodia Teitelboim, poet and former leader of the Chilean Communist Party, is one of the many Jews to have held important political positions in the country.
Tomás Hirsch is the leader of the radical Green-Communist coalition and was a presidential candidate in 2005. The state ministers Karen Poniachick (Minister for Mining) and Clarisa Hardy (Minister for Social Affairs) are also Jewish. In sport, the tennis player Nicolás Massú (gold medalist at Athens 2004 and a former top-ten player in the ATP rankings) has a Jewish background. Many of the country's most important companies, particularly in the retail and commercial field, were set up by Jews. Examples are Calderón, Gendelman, Hites, and Pollak (commercial retailers) and Rosen (mattress and bed industries). Colombia "New Christians" fled the Iberian Peninsula to escape persecution and seek religious freedom during the 16th and 17th centuries. It is estimated that some reached northern areas of Colombia, which at the time was known as New Granada. Most if not all of these people assimilated into Colombian society. Some continue to practice traces of Sephardic Jewish rituals as family traditions. In the 18th century, practicing Spanish and Portuguese Jews came from Jamaica and Curaçao, where they had flourished under English and Dutch rule. These Jews began practicing their religion openly in Colombia at the end of the 18th century, although it was not officially legal to do so, given the established Catholic Church. After independence, Judaism was recognized as a legal religion, and the government granted the Jews land for a cemetery. Many Jews who came during the 18th and 19th centuries achieved prominent positions in Colombian society. Some married local women and felt they had to abandon or diminish their Jewish identity. These included the author Jorge Isaacs, of English Jewish ancestry; the industrialist James Martin Eder, born into the Latvian Jewish community (who adopted the more Christian name Santiago Eder when he translated his name into Spanish); as well as the De Lima, Salazar, Espinoza, Arias, Ramirez, Perez and Lobo families of Caribbean Sephardim. Coincidentally, these persons and their families settled in the Cauca Valley region of Colombia. They have continued to be influential members of society in cities such as Cali. Over the generations most of their descendants were raised as secular Christians. During the early part of the 20th century, numerous Sephardic Jewish immigrants came from Greece, Turkey, North Africa and Syria. Shortly after, Jewish immigrants began to arrive from Eastern Europe. A wave of immigrants, including more than 7,000 German Jews, came after the rise of Nazism in 1933 and the imposition of antisemitic laws and practices. From 1939 until the end of World War II, immigration was brought to a halt by anti-immigrant feeling in the country and by restrictions on immigration from Germany. Colombia asked Germans who were on the U.S. blacklist to leave and allowed Jewish refugees in the country illegally to stay. The Jewish population increased dramatically in the 1950s and 1960s, and institutions such as synagogues, schools and social clubs were established throughout the largest cities in the country. The changing economy and the wave of kidnappings during the last decade of the 20th century led many members of Colombia's Jewish community to emigrate. Most settled in Miami and other parts of the United States. Successes of the nation's Democratic Security Policy have since encouraged citizens to return: the policy drastically reduced violence in rural areas and crime rates in urban areas, and helped spur the economy.
The situation in Colombia has improved to the extent that many Venezuelan Jews are now seeking refuge in Colombia. In the early 21st century, most of the Jews in Colombia are concentrated in Bogotá, with about 20,000 members, and Barranquilla, with about 7,000 members. Large communities are found in Cali and Medellín, but with very few practicing Jews. Smaller communities are found in Cartagena and on the island of San Andrés. There are 14 official synagogues throughout the country. In Bogotá, the Jewish communities each run their own religious and cultural institutions. The Confederación de Asociaciones Judías de Colombia, located in Bogotá, is the central organization that coordinates Jews and Jewish institutions in Colombia. In the new millennium, after years of study, a group of Colombians with Jewish ancestry formally converted to Judaism to be accepted as Jews according to halakha. Costa Rica The first Jews in Costa Rica were probably conversos, who arrived in the 16th and 17th centuries with Spanish expeditions. In the 19th century Sephardic merchants from Curaçao, Jamaica, Panama and the Caribbean followed. They lived mostly in the Central Valley, married local women, and were soon assimilated into the country's general society. Most eventually gave up Judaism altogether. A third wave of Jewish immigrants came before World War I and especially in the 1930s, as Ashkenazi Jews fled a Europe threatened by Nazi Germany. Most of these immigrants came from the Polish town of Żelechów. The term Polacos, which was originally a slur referring to these immigrants, has come to mean door-to-door salesman in colloquial Costa Rican Spanish. The country's first synagogue, the Orthodox Shaarei Zion, was built in 1933 in the capital San José (it is located along 3rd Avenue and 6th Street). Along with a wave of nationalism, there was some antisemitism in Costa Rica in the 1940s, but generally there have been few problems. Since the late 20th century there has been a fourth wave of Jewish immigration, made up of American and Israeli expatriates who are retiring in the country or doing business there. The Jewish community is estimated to number 2,500 to 3,000 people, most of them living in the capital. The San José suburb of Rohrmoser has a strong Jewish influence due to its residents. A couple of synagogues are located there, as well as a kosher deli and restaurant. The Plaza Rohrmoser shopping center had the only kosher Burger King in the country. The Centro Israelita Sionista (Zionist Israeli Center) is a large Orthodox compound housing a synagogue, library and museum. In 2015, the Chaim Weizmann comprehensive school in San José had over 300 students in kindergarten, primary, and secondary grades learning in both Spanish and Hebrew. Cuba Jews have lived on the island of Cuba for centuries. Some Cubans trace Jewish ancestry to crypto-Jews, called Marranos, who fled the Spanish Inquisition. Early colonists generally married native women, and few of their descendants, after centuries of residence, practice Judaism today. There was significant Jewish immigration to Cuba in the first half of the 20th century, as in other countries of Latin America. During this time, Beth Shalom Temple in Havana was constructed and became the most prominent Latin American Jewish synagogue. There were 15,000 Jews in Cuba in 1959, but many Jewish businessmen and professionals left Cuba for the United States after the Cuban revolution, fearing class persecution under the Communists.
In the early 1990s, Operation Cigar was launched, and over a period of five years more than 400 Cuban Jews secretly immigrated to Israel. In February 2007 The New York Times estimated that about 1,500 Jews live in Cuba, most of them (about 1,000) in Havana. Beth Shalom Temple is an active synagogue that serves many Cuban Jews. Curaçao Curaçao has the oldest active Jewish congregation in the Americas – dating to 1651 – and the oldest synagogue in the Americas, in continuous use since its completion in 1732 on the site of a previous synagogue. The Jewish community of Curaçao also played a key role in supporting early Jewish congregations in the United States in the 18th and 19th centuries, including in New York City and Newport, Rhode Island, where the Touro Synagogue was built. Growth in Latin American Jewish communities, primarily in Colombia and Venezuela, resulted from the influx of Curaçaoan Jews. In 1856 and 1902 the Jews of Coro (Venezuela) were plundered, maltreated, and driven to seek refuge in their native Curaçao. Dominican Republic Converso merchants of Sephardic origin arrived in southern Hispaniola during the 15th, 16th and 17th centuries, fleeing the Spanish Inquisition. Over the centuries, many Jews and their descendants assimilated into the general population, and some converted to Catholicism, although many of the country's Jews still retain elements of the Sephardic culture of their ancestors. Later, in the 18th and 19th centuries, many Sephardic families from Curaçao emigrated to the Dominican Republic. Sosúa, meanwhile, a small town close to Puerto Plata, was founded by Jews fleeing the rising Nazi regime of the 1930s. Rafael Trujillo, the country's dictator, welcomed many Jewish refugees to his island, mainly for their skills rather than out of concern over their religious persecution. Present-day Sosúa still possesses a synagogue and a museum of Jewish history. Descendants of those Jews can still be found in many other villages and towns in the north of the island close to Sosúa.[citation needed] Ecuador For some time prior to the 20th century, many Jews in Ecuador were of Sephardic ancestry and some retained their use of the Judaeo-Spanish (Ladino) language. Today, however, most Jewish people in Ecuador are of Ashkenazi ancestry. Some assume that these groups were among the European settlers of Ecuador. Many Jewish people came from Germany in 1939, on a ship called the "Koenigstein". During the years 1933–43, there was a population of 2,700 Jewish immigrants. In 1939, the Jewish population, mostly German and Polish Jews, was expelled by a decree of the Italian-influenced government of Alberto Enríquez Gallo. Antisemitism spread among the population, but was stopped by the intervention of the American embassy. In 1945, there was a reported population of 3,000, about 85% of whom were European refugees. Jewish immigration to Ecuador had risen as the Holocaust started. In 1950, there was an estimated population of 4,000 Jews living in Ecuador. Most of the active Jewish communities in Ecuador are of German origin. The majority of Ecuadorian Jews live in Quito and Guayaquil. There is a Jewish school in Quito. In Guayaquil, there is a Jewish community under the auspices of Los Caminos de Israel called the Nachle Emuna Congregation. As of 2017, there were only 290 reported Jews in the country. "Among the Jewish immigrants who came to Ecuador were also professionals, intellectuals and artists, some of whom were professors and writers.
Others included Alberto Capua, Giorgio Ottolenghi, Aldo Mugla, Francisco Breth, Hans Herman, Leopold Levy, Paul Engel, Marco Turkel, Henry Fente, Benno Weiser, Otto Glass, Egon Fellig, and Karl Kohn. Olga Fis valued and promoted Ecuadorian folk art, and Constanza Capua conducted research in archaeology, anthropology and colonial art. Of Sephardic ancestry were Leonidas Gilces and his younger brother Angel Theodore Gilces, who helped many immigrants, such as Charles Liebman, who reached the capital with his library, which became the most important in the city. Simon Goldberg, who had owned the Goethe library of old books in Berlin, contributed to the dissemination of reading. Vera Kohn was a psychologist and teacher, occupations that at mid-century held little interest for Ecuadorian women, who generally remained at home, devoid of intellectual curiosity and caring only about social life. The immigrants were largely uninterested in politics, with the exception of Paul Beter, belonging to the second generation of Jews, who became Minister of Economy and Central Bank President." El Salvador The first Jews arrived in El Salvador with the first Spanish settlers in the 16th century: conversos who practiced Judaism in secret. Alsatian-born Bernardo Haas, who came to El Salvador in 1868, was believed to be the country's first "recognized" Jewish immigrant. Another Jew, Leon Libes, was documented as the first German Jew in 1888. Sephardic families also arrived from countries such as Turkey, Egypt, Tunisia, Spain and France. De Sola helped to found the first synagogue and became an invaluable member of the Jewish community. From 1936, the approach of World War II led the Jewish community to help their relatives escape from Europe. Some had relatives in El Salvador, but some were forced to go to countries such as Brazil, Ecuador, Guatemala and Panama. On 30 July 1939, President Martinez barred the entry of fifty Jewish refugees bound for El Salvador on the German ship Portland. On 11 September 1948, the community founded a school, "Colegio Estado de Israel", which it continues to support. According to the latest census, there are currently about 100 Jews living in El Salvador, mostly in the capital city of San Salvador. Most of them have Sephardic roots. There is a small town called Armenia in rural El Salvador where people have practiced Orthodox Sephardic Judaism since the Inquisition. French Guiana Jews arrived in French Guiana by way of the Dutch West India Company. Later, on 12 September 1659, Jews arrived from the Dutch colonies in Brazil. The company appointed David Nassy, a Brazilian refugee, patron of an exclusive Jewish settlement on the western side of the island of Cayenne, an area called Remire or Irmire. From 1658 to 1659, Paulo Jacomo Pinto negotiated with the Dutch authorities in Amsterdam to allow a group of Jews from Livorno, Italy to settle in the Americas. On 20 July 1660, more than 150 Sephardic Jews left Livorno (Leghorn) and settled in Cayenne. The French agreed to those terms, an exceptional policy that was not common among the French colonies. Nevertheless, nearly two-thirds of the population left for the Dutch colony of Suriname, and over the following decades the Leghorn Jews of Cayenne immigrated to Suriname. In 1667, the remaining Jewish community was captured by the occupying British forces, who moved the population to either Suriname or Barbados to work in sugarcane production. Since the late 17th century, few Jews have lived in French Guiana.
In 1992, 20 Jewish families from Suriname and North Africa attempted to re-establish the community in Cayenne. A Chabad organization exists in the country and maintains Jewish life within the community. Today, 800 Jews live in French Guiana, predominantly in Cayenne. Guatemala The first Jewish migrations to Guatemala date back to the Spanish period. Historical records from the Mexican Inquisition reveal that the earliest Jewish settlers were crypto-Jews and converts, and their descendants – mainly converted – remain, as in other Spanish American countries. The modern Jewish community in Guatemala, however, traces its roots to German Jewish immigrants who arrived in the mid-19th century. The Jews in Guatemala are mainly descendants of immigrants from Germany, Eastern Europe and the Middle East who arrived in the second half of the 19th century and the first half of the 20th. The first Jewish families arrived from the town of Kempen, Posen, Prussia (today Kępno, Poland), establishing themselves in Guatemala City and Quetzaltenango. Immigrants from the Middle East (mainly Turkey) arrived during the first three decades of the 20th century. Many immigrated during World War II. There are approximately 900 Jews living in Guatemala today, most of them in Guatemala City. Today, the Jewish community in Guatemala is made up of Orthodox Jews, Sephardim, and Eastern European and German Jews. In 2014, numerous members of the communities Lev Tahor and Toiras Jesed, who practice a particularly austere form of Orthodox Judaism, began settling in the village of San Juan La Laguna. Mainstream Jewish communities were concerned about the reputation that followed this group, which had left both the US and Canada under allegations of child abuse, underage marriage and child neglect. Despite the tropical heat, the members of the community continued to wear long black cloaks for men and the full black chador for women. Haiti When Christopher Columbus arrived in Santo Domingo, as he named it, among his crew was an interpreter, Luis de Torres, who was Jewish. Luis was one of the first Jews to settle on Santo Domingo in 1492. When the western part of the island was taken over by France in 1633, many Dutch Sephardic Jews came from Curaçao, arriving in 1634, after the Portuguese had taken over there. Others immigrated from English colonies such as Jamaica, contributing to the merchant trade. In 1683, Louis XIV banned all religions except Catholicism in the French colonies and ordered the expulsion of Jews, but this was lightly enforced. Sephardic Jews remained in Saint-Domingue as leading officials in French trading companies. After the French Revolution instituted religious freedom in 1791, additional Jewish merchants returned to Saint-Domingue and settled in several cities. Some likely married free women of color, establishing families. In the 21st century, archaeologists discovered a synagogue of crypto-Jews in Jérémie in the southwest area of the island. In Cap-Haïtien, Cayes and Jacmel, a few Jewish tombstones have been uncovered. In the late eighteenth century, at the time of the French Revolution, the free people of color pressed for more rights in Saint-Domingue, and a slave revolt led by Toussaint L'Ouverture broke out in 1791 in the north of the island.
Slaves considered Jews to be among the white oppressor group.[citation needed] Through the years of warfare, many people of the Jewish community were among the whites killed; some Jews were expelled when the slaves and free blacks took power and instituted restrictions on foreign businessmen.[citation needed] Haiti achieved independence in 1804 but was not recognized by other nations for some time and struggled economically, based on a peasant culture producing coffee as a commodity crop. Foreigners were prohibited from owning land and subject to other restrictions. Planters and other whites were killed in 1805, and Jews were among the whites and people of color who fled to the United States, many settling in New Orleans or Charleston. Race, as defined in the slavery years, and nationality became more important in Haiti in the 19th century than religion, and Jews were considered whites and nationals of their respective groups. Later in the century, Polish Jews immigrated to Haiti due to the civil strife in Poland and settled in Cazale, in the north-west region of the country. Most Jews settled in port cities, where they worked as traders and merchants. In 1881 a crowd in Port-au-Prince attacked a group of Jews but was driven back by militiamen. By the end of the 19th century, a small number of Mizrahi Jewish families had immigrated to Haiti from Lebanon, Syria and Egypt; a higher number of Levantine Christian traders arrived at the same time. German Jews arrived with other German businessmen; they were highly acculturated and were considered part of the German community. In 1915, there were 200 Jews in Haiti. During the 20 years of American occupation, many of the Jews emigrated to the United States. The US and Haiti had joint interests in reducing the number and influence of foreign businessmen. In 1937, the government issued passports and visas to Jews of Germany and Eastern Europe to help them escape Nazi persecution, while retaining and restricting control of any naturalization of foreigners. During this time, 300 Jews lived on the island. Most of the Jews stayed until the late 1950s, when they moved on to the United States or Israel. As of 2010, the number of known Jews in Haiti is estimated at 25, residing in the relatively affluent suburb of Pétion-Ville, outside Port-au-Prince. Haiti and Israel maintain full diplomatic relations, but Israel's nearest permanent diplomat to the region is based in the neighboring Dominican Republic.[citation needed] Honduras During the 20th century, up to the 1980s, Jewish immigrants came to Honduras mainly from Russia, Poland, Germany, Hungary and Romania. There was also immigration from Greece, by Jews of Sephardic origin, and from Turkey and North Africa, by Jews of Mizrahi origin. Throughout the 1970s and 1980s, the country absorbed a large number of Jewish immigrants from Israel. Over the past two decades, Honduras has experienced a resurgence of Jewish life. Communities in Tegucigalpa and San Pedro Sula grew more active. In 1998, Hurricane Mitch destroyed the synagogue, which was part of the Jewish community center in Honduras, but the Jewish community contributed money to rebuild the temple. Most Honduran Jews live in Tegucigalpa. Jamaica The history of the Jews in Jamaica predominantly dates back to the 1490s, when many Jews from Portugal and Spain fled the persecution of the Holy Inquisition. When the English captured the colony of Jamaica from Spain in 1655, Jews who were living as conversos began to practice Judaism openly.
In 1719, the synagogue Kahal Kadosh Neve Tsedek in Port Royal was built. By the year 1720, 18 percent of the population of the capital, Kingston, was Jewish. For the most part, Jews practiced Orthodox rituals and customs. A recent study has estimated that nearly 424,000 Jamaicans are descendants of Jewish (Sephardic) immigrants to Jamaica from Portugal and Spain from 1494 to the present, either by birth or ancestry. Jewish documents, gravestones written in Hebrew and recent DNA testing support this. While many are non-practicing, it is recorded that over 20,000 Jamaicans religiously identify as Jews.[citation needed] Common Jewish surnames in Jamaica are Abrahams, Alexander, Isaacs, Levy, Marish, Lindo, Lyon, Sangster, Myers, Da Silva, De Souza, De Cohen, De Leon, DeMercado, Barrett, Babb, Magnus, Codner, Pimentel, DeCosta, Henriques and Rodriques.[citation needed] In 2006 the Jamaican Jewish Heritage Center opened to celebrate 350 years of Jewish life in Jamaica.[citation needed] Mexico New Christians arrived in Mexico as early as 1521. Due to the strong presence of the Catholic Church in Mexico, few conversos and even fewer Jews migrated there after the Spanish Conquest of Mexico. Then, in the late 19th century, a number of German Jews settled in Mexico as a result of invitations from Maximilian I of Mexico, followed by a huge wave of Ashkenazic Jews fleeing pogroms in Russia and Eastern Europe. A second large wave of immigration occurred as the Ottoman Empire collapsed, leading many Sephardic Jews from Turkey, Morocco, and parts of France to flee. Finally, a wave of immigrants fled the increasing Nazi persecutions in Europe during World War II. According to the 2010 Census, there are 67,476 Jews in Mexico, making them the third-largest Jewish community in Latin America. Based in Cancún, Chabad reached out to the whole of Quintana Roo and the Mexican Caribbean, including Playa del Carmen, Cozumel, Isla Mujeres and Mérida. In 2010 it opened a Chabad branch in Playa del Carmen to expand its activities; Rabbi Mendel Goldberg, along with his wife Chaya and two daughters, was assigned to direct the activities there and open a new center. The state of Baja California has also had a Jewish presence for the last few hundred years. La Paz, Mexico was home to many Jewish traders who would dock at the port and do business. Many locals in La Paz descend from the prominent Schcolnik, Tuschman and Habiff families, although most are assimilated into Mexican life. In recent years, the tourist industry has picked up in Baja California Sur, which saw many American retirees purchase and live in properties around the Baja. In 2009, with a grassroots Jewish community forming and with the help of Tijuana-based businessman Jose Galicot, Chabad sent out Rabbi Benny Hershcovich and his family to run the operations of the Cabo Jewish Center, located in Los Cabos, Mexico, but providing Jewish services and assistance to Jews scattered throughout the Baja Sur region, including La Paz, Todos Santos and the East Cape. Nicaragua In the 20th century, Nicaragua's Jewish community consisted mostly of immigrants from Eastern Europe who arrived in Nicaragua after 1929. The Jews in Nicaragua were a relatively small community, with most living in Managua. The Jews made significant contributions to Nicaragua's economic development while dedicating themselves to farming, manufacturing and retail sales. The Jewish population of Nicaragua is estimated to have peaked at about 250 in 1972.
Some 60 Jews left the country after the 1972 earthquake that devastated Managua and destroyed many Jewish businesses, while others fled during the violence and unrest of the 1978–1979 Sandinista Revolution. When the Nicaraguan dictator Anastasio Somoza was deposed in 1979, almost all of the remaining Nicaraguan Jews left the country, concerned about their future under the incoming socialist government. Beginning in 1983, the Reagan administration in the U.S. made a concerted effort to increase domestic support for funding the Contras by persuading American Jews that the Sandinista government was antisemitic. According to the Contra leader Edgar Chamorro, CIA officers told him of this plan in a 1983 meeting, justifying it with the antisemitic argument that Jews controlled the media and that winning them over would be key to a public relations success. The Anti-Defamation League supported the Reagan administration's charges of Sandinista antisemitism, having actively worked with Nicaraguan Jews to reclaim a synagogue that had been firebombed by Sandinista militants in 1978 and seized by the Sandinista government in 1979. However, a variety of left-wing organizations that opposed the Reagan administration's policies in Latin America – including the progressive New Jewish Agenda and the leftist NGO the Council on Hemispheric Affairs – as well as the American Jewish Committee, all found that there was no evidence to support the U.S. charge of government antisemitism. Anthony Quainton, U.S. ambassador to Nicaragua, also reported no evidence of government antisemitism after an investigation by embassy staff. The dozens of Nicaraguan Jews who had fled the country supported the Reagan administration's charges of antisemitism, citing several instances of intimidation, harassment, and arbitrary arrest, but two of the Jews who remained in Nicaragua denied the accuracy of these accusations, and their denials were widely cited in the media at the time. After Daniel Ortega lost the 1990 presidential election, Nicaraguan Jews started returning to Nicaragua. Prior to 1979 the Jewish community had no rabbi or mohel (circumcision practitioner). In 2005, the Jewish community numbered about 50 people and included 3 mohalim, but had no ordained rabbi. In 2017, there was a mass conversion of 114 Nicaraguans to Judaism. Panama The presence of Anusim or crypto-Jews was recorded as early as the first migrations of Spaniards and Portuguese to the territory. Researcher and writer Elyjah Byrzdett explains that the Judeo-Converso phenomenon in Panama can be divided into two main periods: the Castilian period and the Portuguese period. The Castilian period was marked by the arrival of crypto-Jews of Castilian origin, who played an active role in the colonization of the territory. When Rodrigo de Bastidas arrived on the Isthmus of Panama in 1501, he was accompanied by recent converts to Christianity. From the first Spanish expeditions and throughout the conquest, Judeo-Conversos were present in the region. The governor and founder of the city of Panama, Pedro Arias Dávila (known as Pedrarias), had Jewish ancestry on both his paternal and maternal sides. His paternal grandfather, Ysaque Abenazar, was an influential member of the Jewish community of Segovia, who later converted to Catholicism and adopted the name Diego Arias Dávila. Although his religious beliefs remain uncertain, it is established that he protected Judeo-Conversos from persecution led by the Franciscan friar Juan de Quevedo.
Other notable figures of Converso origin included several of the isthmus's early captains and governors. In his work The Pisa Family: A Converso Lineage, Byrzdett documents the detailed genealogy of the Pisa family, whose descendants arrived in Panama before settling in other regions. Although not all Crypto-Jews bore the name "de Pisa," the author uses it as a reference because of its significance as a common lineage among several Converso families in the region. The Portuguese period began in 1580, following the dynastic union of Portugal with the Spanish Crown. During this period, Portuguese Crypto-Jews, who were better organized and had more resources, managed to establish a house of prayer on Calafates Street, located behind the old cathedral of Panama la Vieja. However, the Inquisition intensified its persecution of Judaizers, culminating in 1640 in an event known as the "Great Conspiracy," which dismantled much of the Crypto-Jewish network on the Isthmus. From then on, their presence in historical records became more difficult to trace, as fear of persecution led many to further conceal their identity. One of the best-documented episodes of this persecution was the arrest of the Portuguese Sebastián Rodríguez, accused of being a Judaizer, that is, a practitioner of Judaism. Rodríguez led a group of Crypto-Jews, including Antonio de Ávila, González de Silva, Domingo de Almeyda, and a Mercedarian friar, all secretly practicing Judaism. During the trial, four doctors confirmed the presence of a circumcision mark on Rodríguez, which was used as evidence against him. When the isthmus joined Simón Bolívar's federation project, a new wave of Jewish migration took place, revitalizing the Mosaic faith in the region. These early Jewish immigrants arrived under a new policy that encouraged religious freedom in the newly independent territories. Thanks to their proficiency in languages such as German, Spanish, French, English, Dutch, and Papiamento, they played a crucial role as intermediaries and translators, facilitating communication between the local population and foreigners arriving in or passing through the region. Sephardic (Judeo-Spanish) Jews, mainly from nearby islands such as Curaçao, St. Thomas and Jamaica, and Ashkenazi (Judeo-German) Jews from Central and Eastern Europe began arriving in Panama in large numbers in the mid-nineteenth century, attracted by economic opportunities such as the construction of the interoceanic railroad and the California Gold Rush. This migratory flow marked an important chapter in the history of Panama's Jewish community. The Republic of Panama, in its current form, would be significantly different without the notable contributions of the Panamanian Jewish community. Its role in the struggle for the country's independence in 1903 was crucial and prevented the failure of the separatist movement. Prominent members of the Kol Shearith Israel Congregation, such as Isaac Brandon, M.D. Cardoze, M.A. De León, Joshua Lindo, Morris Lindo, Joshua Piza, and Isaac L. Toledano, provided essential financial support to the Revolutionary Junta when the promised funds from Philippe Jean Bunau-Varilla failed to materialize. Without their contribution, the lives of the leaders responsible for Panama's separation from Colombia could have been in jeopardy.
For this reason, the commitment of the Jewish community was of vital importance at this historic moment for Panama. They were followed by other waves of immigration: during the First World War from the disintegrating Ottoman Empire, before and after the Second World War from Europe, from Arab countries because of the exodus of 1948, and more recently from South American countries suffering economic crises. The center of Jewish life in Panama is Panama City, although historically small groups of Jews settled in other cities, such as Colón, David, Chitre, La Chorrera, Santiago de Veraguas and Bocas del Toro. Those communities are disappearing as families move to the capital in search of education for their children and for economic reasons. Today the Jewish community numbers some 20,000. Panama is the only country in the world other than Israel that has had two Jewish presidents in the twentieth century: in the sixties, Max Delvalle was first vice president and then president, and his nephew, Eric Arturo Delvalle, was president between 1985 and 1988. Both were members of the Kol Shearith Israel synagogue and were involved in Jewish life.

Paraguay
Toward the end of the 19th century, Jewish immigrants arrived in Paraguay from countries such as France, Switzerland and Italy. During World War I, Jews from Palestine (Jerusalem), Egypt and Turkey, mostly Sephardic, arrived in Paraguay. In the 1920s, there was a second wave of immigrants from Ukraine and Poland. Between 1933 and 1939, between 15,000 and 20,000 Jews from Germany, Austria and Czechoslovakia took advantage of Paraguay's liberal immigration laws to escape Nazi persecution in Europe. After World War II, most Jews who arrived in Paraguay were survivors of concentration camps. Today, there are about 1,000 Jews in Paraguay, mostly living in the capital, Asunción; most are of German descent.

Peru
In Peru, conversos arrived at the time of the Spanish Conquest. At first, they lived without restrictions, because the Inquisition was not active in Peru at the beginning of the Viceroyalty. With the advent of the Inquisition, New Christians began to be persecuted and, in some cases, executed. In this period, these people were sometimes called "marranos", "conversos", and "cristianos nuevos" (New Christians), even if they had not been among the original converts from Judaism and had been reared as Catholics. The descendants of these colonial Sephardic converts to Christianity settled mainly in the northern highlands and the northern high jungle, assimilating with the local people in Cajamarca and in the northern highlands of Piura such as Ayabaca and Huancabamba, among other places, owing to cultural and ethnic contact with the southern highlands of Ecuador. In modern times, before and after the Second World War, some Ashkenazi Jews, mainly from Western and Eastern Slavic lands and Hungary, migrated to Peru, mostly to Lima. Today, Peruvian Jews play an important part in the economy and politics of Peru; the majority of them are from the Ashkenazi community.

Puerto Rico
Puerto Rico is currently home to the largest Jewish community in the Caribbean, with over 3,000 Jews supporting four synagogues: three in the capital city of San Juan, one each Reform, Conservative and Chabad, as well as a Satmar community, known as Toiras Jesed, in the town of Mayagüez in the western part of the island.
Many Jews settled on the island as secret Jews, living in its remote mountainous interior, as early Jews did in all the Spanish and Portuguese colonies. In the late 1800s, during the Spanish–American War, many Jewish American servicemen gathered with local Puerto Rican Jews at the Old Telegraph building in Ponce to hold religious services. Many Central and Eastern European Jews came after World War II.

Suriname
Suriname has the oldest Jewish community in the Americas. During the Inquisition in Portugal and Spain around 1500, many Jews fled to the Netherlands and the Dutch colonies to escape social discrimination and inquisitorial persecution, which sometimes included torture and condemnation to the stake. Those who converted to the Catholic faith were called New Christians, conversos, and, less often, "Marranos". The stadtholder of the King of Portugal gave those who wanted to depart time to settle their affairs, and supplied them with 16 ships and safe conduct to leave for the Netherlands. The Dutch government gave them the opportunity to settle in Dutch-held Brazil, and most found their home in Recife, where merchants became cocoa growers. But the Portuguese in Brazil later forced many Jews to move north into the Dutch colonies in the Americas, the Guianas. Jews settled in Suriname in 1639.[citation needed] Suriname became one of the most important centers of the Jewish population in the Western Hemisphere, and Jews there were planters and slaveholders. During World War II, many Jewish refugees from the Netherlands and other parts of Europe fled to Suriname. Today, 2,765 Jews live in Suriname.[citation needed]

Trinidad and Tobago
Trinidad and Tobago, a former British colony, is home to over 500 Jews.

Uruguay
Uruguay is home to the fifth-largest Jewish community in Latin America, but the largest as a proportion of the country's total population. The Jewish presence began during the colonial era, with the arrival in the Banda Oriental of conversos fleeing the Spanish Inquisition. Considerable Jewish immigration, however, began at the end of the 19th century with the arrival of some Sephardic Jews from neighboring countries, and grew during the first half of the 20th century with the arrival of a large number of Ashkenazim. By the first decades of the 20th century, the Jewish community had already set up an educational network, and its presence was notable in several areas of the capital, Montevideo, such as the Villa Muñoz neighborhood, which became known as the city's Jewish quarter. In addition, Jews from Belarus and Bessarabia formed an agricultural community in the rural Paysandú Department. Most Jewish immigration to Uruguay took place in the 1920s and 1930s, although in the latter period some fascist and liberal anti-immigration sectors opposed all foreign immigration, which weighed heavily on Jewish arrivals. Even so, the country has traditionally been a destination for large numbers of Jewish refugees during and after World War II. In 1940, the Central Israelite Committee of Uruguay was founded, uniting the different Jewish communities that had formed based on the place of origin of the Jews who arrived in the country. It is estimated that between the 1950s and 1960s the Jewish community in Uruguay numbered approximately 50,000 people.
Venezuela
The history of Venezuelan New Christians most likely began in the middle of the 17th century, when some records suggest that groups of conversos lived in Caracas and Maracaibo. At the turn of the 19th century, Venezuela and Colombia were fighting wars of independence against their Spanish colonizers. Simón Bolívar, Venezuela's liberator, and his sister found refuge, and material support for his army, in the homes of Jews from Curaçao. After independence, in 1826, practicing Jews came from Curaçao, where they had flourished under Dutch rule, to Santa Ana de Coro. Judaism was recognized as a legal religion, and the government granted the Jews land for a cemetery. According to a national census taken at the end of the 19th century, 247 Jews lived in Venezuela as citizens in 1891. In 1907, the Israelite Beneficial Society, which became the Israelite Society of Venezuela in 1919, was created to bring together the Jews scattered through the country's various cities and towns. By 1943, nearly 600 German Jews had entered the country, with several hundred more becoming citizens after World War II. By 1950, the community had grown to around 6,000 people, even in the face of immigration restrictions. During the first decades of the 21st century, many Venezuelan Jews decided to emigrate because of growing antisemitism and the country's political crisis and instability. Currently, around 10,000 Jews live in Venezuela, more than half of them in the capital, Caracas. Venezuelan Jewry is split equally between Sephardim and Ashkenazim, and all but one of the country's 15 synagogues are Orthodox. The majority of Venezuela's Jews are members of the middle class. The current president of Venezuela, Nicolás Maduro, claims to be of Sephardic Jewish descent; Jewish groups such as the Latin American Jewish Congress have accused Maduro and his predecessor, Hugo Chávez, of fostering antisemitism. Reported Jewish populations in the Americas and the Caribbean in 2014 were drawn from the CIA World Factbook, with most estimates current as of July 2014, and from the Jewish Virtual Library's Vital Statistics: Jewish Population of the World (1882–present).
======================================== |