[SOURCE: https://en.wikipedia.org/wiki/List_of_open-source_codecs] | [TOKENS: 89] |
List of open-source codecs
This is a listing of open-source codecs, that is, open-source software implementations of audio or video coding formats (audio codecs and video codecs respectively). Many of the codecs listed implement media formats that are restricted by patents and are hence not open formats. For example, x264 is a widely used open-source implementation of the heavily patent-encumbered MPEG-4 AVC video compression standard.
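As a brief illustration of how such an open-source implementation is used in practice, the sketch below drives the x264 encoder through ffmpeg's libx264 plugin to produce an MPEG-4 AVC stream. This is a minimal example assuming an ffmpeg build with libx264 is installed and on the PATH; the helper function and file names are illustrative and not part of the article.

```python
# Minimal sketch: encode a clip to H.264/MPEG-4 AVC with the open-source x264
# encoder, invoked through ffmpeg's libx264 plugin. Assumes an ffmpeg build
# with libx264 is installed and on PATH; file names below are placeholders.
import subprocess

def encode_avc(src: str, dst: str, crf: int = 23) -> None:
    """Re-encode the video stream of `src` to AVC and copy the audio unchanged."""
    subprocess.run(
        [
            "ffmpeg", "-y",        # overwrite the output file if it exists
            "-i", src,             # input file
            "-c:v", "libx264",     # open-source AVC encoder (x264)
            "-crf", str(crf),      # constant-rate-factor quality target
            "-c:a", "copy",        # pass the audio stream through untouched
            dst,
        ],
        check=True,                # raise if ffmpeg exits with an error
    )

if __name__ == "__main__":
    encode_avc("input.mkv", "output.mp4")
```

Because the format itself is standardized (if patent-encumbered), the resulting file can be decoded by any compliant AVC decoder, open-source or proprietary.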
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Igboland] | [TOKENS: 3805] |
Igboland
Igboland (Standard Igbo: Àlà Ị̀gbò) is a cultural and common linguistic region in southeastern Nigeria which is the indigenous homeland of the Igbo people. Geographically, it is divided into two sections, eastern (the larger of the two) and western. Its population is characterized by the diverse Igbo culture.: 307 : 315 Politically, Igboland is divided into several southern Nigerian states; culturally, it has included several subgroupings, including the Awka-Enugu-Nsukka, Anioma-Enuani, the Umueri-Aguleri-Anam groups, the Ngwa, the Orlu-Okigwe-Owerri communities, the Mbaise, the Ezza, Bende, the Ikwuano-Umuahia (these include Ohuhu, Ubakala, Oboro, Ibeku, etc.), the Omuma, the Abam-Aro-Ohafia (Abiriba and Nkporo), the Waawa, the Ndoki, the Etche, the Ekpeye, and the Ogba. Territorial boundaries Igboland is surrounded on all sides by large rivers and by other indigenous peoples of southern and central Nigeria, namely the Igala, Tiv, Yako, Idoma and Ibibio. In the words of William B. Baikie: Igbo settlement, extends east and west in the Niger-Delta region which is owned by the Middle-Belt, formerly known as Bendel, from the Old Kalabar river to the banks of the Kwora, Niger River, and live in some territory at Aboh, to the west-ward of the latter stream. On the north it borders on Igara, Igala and A'kpoto, and it is separated from the sea only by petty tribes, all of which trace their origin to this great race.: 307 Igboland is primarily situated in the lowland forest region of Nigeria. The Igbo can also be found in some parts of the Niger Delta, where the Niger River fans out into the Atlantic Ocean in a vast network of creeks and mangrove swamps on the Bight of Bonny. The earliest settlements found in Igboland date to 900 BCE in the central area, from where the majority of the Igbo-speaking population is believed to have migrated. The northern Igbo Kingdom of Nri, which rose around the 10th century CE, is credited with the foundation of much of Igboland's culture, customs, and religious practices. It is the oldest existing monarchy in present-day Nigeria. In southern Igboland several groups developed, of which the most notable was the Aro Confederacy, and in western Igboland the Anioma kingdom of Aboh. During the late 19th century, Igboland was made part of the Southern Nigeria Protectorate of the British Empire and was amalgamated into modern-day Nigeria in 1914. Nigeria gained independence in 1960. Shortly afterward, Igboland was involved in its biggest war during Biafra's movement for secession. The war ended in 1970, when it was declared one of "no victor, no vanquished". Geography and biodiversity Historically, Igboland has taken up a large part of southeastern Nigeria, mostly on the eastern side of the Niger River. Igbo territory extends westward across the Niger to the regions of Aniocha, Ndokwa, Ukwuani, and Ika in present-day Delta State. Its eastern side is terminated by the Cross River, although micro-communities exist on the other side of the river; its northernmost point enters the savannah climate around Nsukka. In Nigeria today, Igboland is roughly made up of Abia, Anambra, Ebonyi, Enugu, Imo, northern Delta and Rivers states. More than 30 million people inhabit Igboland, and with a population density ranging from 140 to 390 inhabitants per square kilometre (350 to 1,000/sq mi) it could be the most densely populated area in Africa after the Nile Valley. 
Altogether Igboland has an area of some 40,900 to 41,400 km2 (15,800 to 16,000 sq mi). Ancient trade routes Igboland's culture has been shaped by its rainforest climate, its ancient trade along the rivers, migration, and social history within its various clans and peoples. It has been influenced by its ancient trading neighbours, allies, and more recently by relations with Europeans. Mid-nineteenth century trader W. B. Baikie said, "I seized the moment, and, by our interpreter, told Tshukuma, that we had come to make his acquaintance and his friendship, and to ascertain if the people were willing to trade with us." He signed a trade agreement with Igbo chief, Tshukuma (Chukwuma) Obi from Aboh which engaged in early active trading with Europe.: 45 Similarly, Baikie recounted that "after our salutations, I spoke of friendship, of trade, and of education, and particularly enlarged upon the evils of war, and the benefits of peace, all of which was well received", when signing a trade agreement on August 30, 1885, with Ezebogo, an Igbo chief in Asaba.: 296 Due to the native common linguistic standard and interrelated cultures in Igboland, the lower Niger River, which divides Igboland into unequal eastern and western parts, has from ancient times provided easy means of communication, trading and unity amongst the Igbo on both sides of the river.: 300 It also enabled ancient trade and migration of people into Igboland, and between Igboland and rest of the world. Some of the notable ancient trade and export routes in Igboland included the famous lower Niger and Njaba-Oguta lake-Orashi navigational routes via Asaba-Onitsha-Aboh,: 315 and Awo-omamma-Oguta-Ogba–Egbema–Ndoni-Aboh ferry services, respectively.: 300 History There is evidence of Late Stone Age (late Paleolithic) human presence from at least 10,000 years ago. Early settlement of Igboland is dated to 6000 BC based on pottery found in the Okigwe, Oka Igwe, and known today as Awka. In 1978 a team led by Thurstan Shaw, with the University of Nigeria at Nsukka, excavated a rock quarry. They found that it was a mine for tool and pottery making for a 'stone civilisation' nearby at Ibagwa. Anthropologists at the University of Benin have discovered fossils and use of monoliths dating to 4500 BC at Ngodo in Uturu town. Further evidence of ancient settlements were uncovered at what researchers believe may be an Nsukka metal cultural area from 3000 BC, and later settlements attributed to Ngwa culture at AD 8–18. It is unclear what cultural links there are between these pre-historic artefacts and the people of the region today. Later human settlement in the region may have links with other discoveries made in the wider area, particularly with the culture associated with the terracotta discoveries at Nok, which spanned a wide area of present-day north-central Nigeria. Some local villagers retain what they believe are original names of settlements, such as Umuzuoka, The Blacksmiths Ụzụoka, Ọkigwe, Ịmọka, etc.[clarification needed] The Nsukka-Okigwe axis forms a basis for a proposed Proto-Igbo cultural heartland antecedent to contemporary Igbo culture. Much of the Igbo population is believed to have expanded from a smaller area in this region, diverging into several independent Igbo-speaking tribes, village-groups, kingdoms and states. 
The movements were generally broken into two trends in migration: a more northerly group that expanded towards the banks of the Niger and the upper quadrant of the Cross River; the other, following a southerly trail, had risen from the Isu populations based closer to the axis from which the majority of southern Igbo communities emerged. Mbaise are notably the best examples of an Igbo group claiming autochthony; they reject theories of many migratory histories about their origins. Based on the proximity of traditions to those of their neighbours, and familial and political ties, many of these groups are apparently culturally northern or southern Igbo. The first Igbo Ukwu metal and precious artefacts finds were made accidentally in 1939, when a resident named Isiah Anozie found them in the process of digging a cistern. This led to the discovery of a larger network of linked metal works from the 9th century. The works were based in Igbo Ukwu. Further finds were made by archaeology teams led by Thurstan Shaw in 1959–60, and in 1964 in the compound of Jonah Anozie. Initially, throughout the 1960s and 1970s, scholars believed that the Igbo Ukwu bronze and copper items found here had been made elsewhere and were trade goods or were influenced by outside technology due to their technical sophistication. The opposite was revealed to be true: local copper deposits had been exploited by the 9th century and anthropological evidence, such as the Ichi-like scarifications on the human figures, show the items were of local Igbo cultural origin. The works have since been attributed to an isolated bronze industry, which had developed without outside influence over time and reached great sophistication. Igbo trade routes of the early second millennium reached the cities of Mecca, Medina and Jeddah through a network of trade routes journeyed by middlemen. Beads that originated in India in the 9th century have been found in Igbo Ukwu burial sites: Thousands of glass beads were uncovered from the ruined remains of a nobleman's garments. The burial site was associated with the Nri Kingdom, which began around the same century, according to indigenous history. The northern Igbo Kingdom of Nri, rising around the 10th century based on Umunri traditions, is credited with the foundation of much of Igboland's culture, customs, and religious practices. It is the oldest existing monarchy in present-day Nigeria. It was around the mid-10th century that the divine figure Eri is said to have migrated, according to Umunri lore, to the Anambra (Igbo: Omambara) river basin — specifically at its meeting with Ezu river known as Ezu na Omambara in present-day Aguleri. The exact origins of Eri are unknown and much of Nri traditions present him as a divine leader and civiliser sent from heaven to begin civilisation. Due to historic trade and migration of old, other people also entered the Igboland in about the fourteenth or fifteenth centuries and mixed with the natives. Towards the western end of Igboland, across the Niger River, rose a man known as Eze Chima who fled Benin with his accomplices after a dispute with the Oba of Benin who consequently exiled him in the 1560s. As they left Benin City heading eastwards, Eze Chima and his followers settled in a number of lands and established monarchies with the natives in those areas. Those grew into townships and kingdoms after the 16th century. Collectively, these places are known as Umuezechima which translates as 'the children or descendants of king Chima'. 
Igboland was historically known as the Ibo(e), Ebo(e), and Ibwo Country by early European explorers. Igboland was conquered by the British Empire after several decades of resistance on all fronts; some of the most famous of the resistance include the Ekumeku Movement, the Anglo-Aro War, and the Aba Women's Riots which was contributed to by women of different ethnic backgrounds in eastern Nigeria. A number of polities rose either directly or indirectly as a result of Nri; the most powerful states were those of the Aro Confederacy which rose in the Cross River region in the 17th century and declined after British colonisation in the early 20th century. The Aro state centred on Arochukwu followed Nri's steady decline, basing much of its economic activities on the rising trade in slaves to Europeans by coastal African middlemen. The present site of Arochukwu was originally settled by the Ibibio people under the Obong Okon Ita kingdom before the conquest of what became Obinkita in the 17th century by two main Igbo groups: the Eze Agwu clan and the Oke Nnachi assisted by the Ibom Isi (or Akpa) mercenaries under the leadership of the Nnubi dynasty. Led by Agwu Inobia, a descendant of Nna Uru from Abiriba, the Eze Agwu clan was centered at their capital Amanagwu and were resisted by Obong Okon Ita which led to the start of the Aro-Ibibio Wars. The war initially became a stalemate. Both sides arranged a marriage between the king of Obong Okon Ita and a woman from Amanagwu. The marriage eventually failed to bring peace but played a decisive role in the war. Oke Nnachi was led by Nnachi Ipia who was a dibia or priest among the Edda people and was called by Agwu Inobia to help in the war against the Ibibio. These groups were followed by a third non-Igbo Ekoi-cultured group, Akpa or Ibom Oburutu who were led by Akuma Nnaubi, the first Eze Aro, the title of the king of the Aro. In southern Igboland several groups developed mostly independent of Nri influence. Most of these groups followed a migration out of Isu communities in present-day Imo State, although some communities, such as the Mbaise cluster of village groups, claim to be autochthonous. Following the British parliament's abolition of the slave trade in 1807, the British Royal Navy had opened up trade with coastal towns Bonny and Opobo and further inland on the Niger with Asaba in the 1870s. The palm oil industry, the biggest export, grew large and important to the British who traded here. British arrival and trade led to increased encounters between the Igbo and other polities and ethnic groups around the Niger River and led to a deepening sense of a distinct Igbo ethnic identity. Missionaries had started arriving in the 1850s. The Igbo, at first wary of the religion, started to embrace Christianity and Western education as traditional society broke down. Christianity had played a great part in the introduction of European ideology into Igbo society and culture often time through erasure of cultural practice; adherents to the denominations were often barred in partaking in ancient rites and traditions, and joining fraternities and secret societies were forbidden as the church grew stronger. Due to the incompatibility of the Igbo decentralized style of government and the centralized system required for British indirect rule, British colonial rule was marked with open conflicts and much tension. 
Under British colonial rule, the diversity within each of Nigeria's major ethnic groups slowly decreased and distinctions between the Igbo and other large ethnic groups, such as the Hausa and the Yoruba, became sharper. British rule brought about changes in culture, such as the introduction of warrant chiefs as Eze (traditional rulers) where there were no such monarchies. Following the independence of Nigeria from the United Kingdom in 1960, most of Igboland was included in its Eastern Region. Following a coup in 1966 in which mostly Igbo soldiers assassinated politicians from the western and northern regions of Nigeria, Johnson Aguiyi-Ironsi seized control of Lagos, the capital, and came to power as military head of state of Nigeria. General Aguiyi-Ironsi was ambushed and assassinated by Northern members of the military on 29 July 1966, in a revolt against the government that had strong ethnic overtones. Ironsi's assassination stood out because of the method used by his killers: his legs were tied to the back of a Land Rover and he was driven around town while still attached. The Eastern Region formed the core of the secessionist Republic of Biafra. A regional council of the peoples of Eastern Nigeria decided the region should secede as the Republic of Biafra on May 30, 1967. Nigerian General Emeka Odumegwu-Ojukwu on this day made a declaration of independence of Biafra from Nigeria and became the head of state of the new republic. French intelligence officer Jean Mauricheau-Beaupré, a deputy to the then lead coordinator of France's Africa policy Jacques Foccart, declared the following to those who were concerned with French support to Biafra: "[French] support was actually given to a handful of Biafran bourgeoisie in return for the oil. ... The real Ibo mentality is much farther to the left than that of Ojukwu and even if we had won, there would have been the problem of keeping him in power in the face of leftist infiltration." Biafra, for its part, openly appreciated its relationship with France. The Nigerian Civil War (or the "Nigerian-Biafran War") lasted from 6 July 1967 until 15 January 1970, after which Biafra once again became part of Nigeria. The Republic of Biafra was defeated after three years of war (1967 to 1970) by the federal government of Nigeria, with military support from the United Kingdom (strategy and ammunition), the Soviet Union (ammunition), and the United Arab Republic (air force), as well as with support from other states around the world. The effects of Nigerian war strategies, such as starvation imposed on Biafran civilians (most of whom were ethnic Igbo), remain a controversial topic. The movement for the sovereignty of Biafra has continued among a minority, most of whom make up the MASSOB organisation.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Taiwan%E2%80%93United_States_relations] | [TOKENS: 8338] |
Taiwan–United States relations
After the United States established diplomatic relations with the People's Republic of China (PRC) in 1979 and recognized Beijing as the only legal government of China, Taiwan–United States relations became unofficial and informal under the terms of the Taiwan Relations Act (TRA), which allows the United States to have relations with the Taiwanese people and their government, whose name is not specified. U.S.–Taiwan relations were further informally grounded in the Six Assurances in response to the third communiqué on the establishment of US–PRC relations. The Taiwan Travel Act, passed by the U.S. Congress on March 16, 2018, allows high-level U.S. officials to visit Taiwan and vice versa. Both sides have since signed a consular agreement formalizing their existing consular relations on September 13, 2019. The US government removed self-imposed restrictions on executive branch contacts with Taiwan on January 9, 2021. The U.S. has viewed Taiwan as geostrategically important given its key location in the first island chain. Over the past four decades, the U.S. government's policy of deliberate ambiguity toward Taiwan has been viewed as critical to stabilizing cross-strait relations by seeking to deter the PRC from using force toward the region and dissuade Taiwan from seeking independence. However, in recent years, as Beijing escalated its moves and further clarified its intentions, the effectiveness of strategic ambiguity became a topic of debate in academic and policy communities. Taiwan is the United States' ninth largest trading partner. As stipulated by the TRA, the United States remains the main provider of arms to Taiwan, which has often been a source of tension with the PRC. Both states maintain representative offices functioning as de facto embassies. Taiwan is represented by the Taipei Economic and Cultural Representative Office in the United States (TECRO), while the U.S. government is represented by the American Institute in Taiwan (AIT). History In 1784, the United States attempted to send a consul to China but was rejected by the Chinese government. Official relations began on June 16, 1844, under U.S. President John Tyler, leading to the 1845 Treaty of Wangxia. Two American diplomats in the 1850s suggested to Washington that the U.S. should obtain the island of Taiwan from China, but the idea was rejected. Aboriginals in Taiwan often attacked and massacred shipwrecked western sailors, and American diplomats tried to help the sailors. 
In 1867, during the Rover incident, a group of Taiwanese Indigenous people killed the crew of a wrecked American ship. A subsequent U.S. military expedition attempted retaliation but was defeated in a skirmish, resulting in the death of another American. In 1894, the Revive China Society, an early predecessor of the Kuomintang (KMT) political party, was established in Honolulu to oppose the Qing dynasty, prior to the annexation of Hawaii by the United States. As Taiwan was under Japanese control, following the 1911 Revolution, which overthrew the Qing dynasty, the William Taft administration recognized the government of the Republic of China (ROC) as the sole and legitimate government of China despite a number of governments ruling various parts of China. As one of the first Western countries to recognize the Republic of China, the United States prompted the government in Peking to send a delegation led by Ch’en Ch’i to the Panama-Pacific International Exposition world's fair in February 1915. Chinese organizers reportedly contributed more than half of all exhibits presented by participating nations. China was reunified by a single government, led by the Kuomintang (KMT) in 1928, which subsequently gained recognition as China's only legitimate government despite continued internal strife. The first recipient of the Nobel Prize in Literature recognized for writing about China was Pearl S. Buck, an American author born in the United States and raised in China. Her 1938 Nobel lecture was titled The Chinese Novel. During the period of Japanese rule, the United States operated a consulate in Taihoku, Formosa (now Taipei), beginning in 1913. The consulate was closed in 1941 due to United States declaration of war on Japan. In 1997, the building was listed as historic monument by Chinese government. During the Pacific War, the United States and the Republic of China were allied against Japan. In October 1945, a month after Japan's surrender, representatives of Chinese leader Chiang Kai-shek, on behalf of the Allies, were sent to Formosa to accept the surrender of Japanese troops. However, during the period of the 1940s, there was no recognition by the United States Government that Taiwan had ever been incorporated into Chinese national territory. Chiang continued to remain suspicious of America's motives. After being defeated by Communist forces in the Chinese Civil War: 125 , the Nationalist government of the Republic of China retreated to Taiwan.: 125 In August 1949, the United States suspended the Republic of China’s participation in the Fulbright Program, as the ROC government, then in retreat, was unable to continue payments on surplus war materials purchased from the United States following World War II.: 32 On January 5, 1950, United States President Harry S. 
Truman issued a statement that the United States would not become involved in "the civil conflict in China" and would not provide military aid or advice to the Nationalist forces in Taiwan.: 125 On February 6, 1950, the ROC Air Force bombed Shanghai, causing extensive damage to American-owned property in the city including the Shanghai power company.: 125 The American government responded by sending a diplomatic protest to the ROC Ministry of Foreign Affairs.: 125 As the Korean War broke out, the United States resumed military aid to the ROC and sent the US Navy's Seventh Fleet into the Taiwan Strait.: 50 US military presence in Taiwan consisted of the Military Assistance Advisory Group (MAAG) and the United States Taiwan Defense Command (USTDC). Other notable units included the 327th Air Division. United States military, technical, and economic aid to Taiwan increased following China's entry into the Korean War in late October 1950.: 128 United States General Douglas MacArthur described Taiwan as an "unsinkable aircraft carrier" and visited the island during the war.: 164 Until the U.S. formally recognized the People's Republic of China in 1979, the U.S. had provided ROC with financial grants based on the Foreign Assistance Act, Mutual Security Act,: 129 and Act for International Development enacted by the US Congress. Taiwan became a top recipient of United States aid in the following years.: 128 After their defeat in the Chinese Civil War, parts of the Republic of China army had retreated south and crossed the border into Burma.: 65 The United States supported these Republic of China forces in the hope that they would engage the People’s Republic of China from the southwest, thereby diverting Chinese resources away from the Korean War.: 65 The Burmese government protested and international pressure increased.: 65 Beginning in 1953, several rounds of withdrawals of the ROC forces and their families were carried out.: 65 In 1960, joint military action by PRC and Burma expelled the remaining ROC forces from Burma, although some went on to settle in the Burma-Thailand borderlands.: 65–66 During a visit to Taiwan in 1953, U.S. Vice President Richard Nixon stated that the United States would support the development of Taiwan as an anti-communist military and cultural stronghold.: 10 In 1954, the United States began providing significant funding for education in Taiwan, including to attract overseas Chinese.: 10 These efforts also helped the KMT to consolidate its power on Taiwan.: 10 The Sino-American Mutual Defense Treaty was signed between the US and ROC in 1954 and lasted until 1979. The U.S. State Department's official position on Taiwan in 1959 was: That the provisional capital of the Republic of China has been at Taipei, Taiwan (Formosa) since December 1949; that the Government of the Republic of China exercises authority over the island; that the sovereignty of Formosa has not been transferred to China; and that Formosa is not a part of China as a country, at least not as yet, and not until and unless appropriate treaties are hereafter entered into. Formosa may be said to be a territory or an area occupied and administered by the Government of the Republic of China, but is not officially recognized as being a part of the Republic of China. — U.S. 
State Department, 1959, In 1970s, Taiwanese activist Peter Huang attempted to assassinate Chiang Ching-kuo in New York City.: 27 During the early Cold War the United States deployed nuclear weapons on Taiwan as part of the United States Taiwan Defense Command. In 1972, United States president Richard Nixon ordered nuclear weapons to be removed from Taiwan and this was implemented by 1974. In the 1970s, the Kuomintang (KMT) government, led by Executive Yuan Premier Chiang Ching-kuo, launched a people’s diplomacy campaign in the United States aimed at rallying public and political opposition to the People’s Republic of China through demonstrations and petitions.: 42 Among these efforts, the KMT worked with the John Birch Society to launch a petition writing campaign through which Americans were urged to write their local government officials and ask them to "Cut the Red China connection.": 42 During its martial law period (1949 to 1987), the Taiwan government surveilled Taiwanese abroad, most often in Japan and in the United States.: 2 The United States Federal Bureau of Investigation often cooperated with or allowed the KMT to surveil Taiwanese students and other Taiwanese migrants in the United States.: 15 According to a 1979 report by the United States Senate Foreign Relations Committee, the Taiwan government operated one of the two most active anti-dissident networks within the United States, with agents infiltrated within universities and campus organizations and large-scale propaganda campaigns implemented through front organizations.: 7 In 1979 and 1980, a series of bombings targeted KMT offices and officials in the United States.: 151 The United States placed the World United Formosans for Independence on its terrorist organization watch list as a result.: 151 At the height of the Sino-Soviet Split, and at the start of the reform and opening of People's Republic of China, the United States strategically switched diplomatic recognition from the Republic of China (ROC) to the People's Republic of China (PRC) on January 1, 1979, to counter the political influences and military threats from the Soviet Union. The US Embassy in Taipei was 'migrated' to Beijing and the Taiwanese Embassy in the US was closed. Following the termination of diplomatic relations, the United States terminated its Mutual Defense Treaty with Taiwan on January 1, 1980.[citation needed] On April 10, 1979, U.S. President Jimmy Carter signed into law the Taiwan Relations Act, which created domestic legal authority for the conduct of unofficial relations with Taiwan. U.S. commercial, cultural, and other interaction with the people on Taiwan is facilitated through the American Institute in Taiwan, a private nonprofit corporation. The institute has its headquarters in the Washington area and has a main office in Taipei and a branch office in Kaohsiung. It is authorized to issue visas, accept passport applications, and provide assistance to U.S. citizens in Taiwan. A counterpart organization, the Taipei Economic and Cultural Representative Office in the United States, has been established by Taiwan. The representative office located in Washington, DC, and has 12 other Taipei Economic and Cultural Offices in the continental U.S. and Guam. The Taiwan Relations Act continues to provide the legal basis for the unofficial relationship between the U.S. and Taiwan, and enshrines the U.S. commitment to assisting Taiwan maintain its defensive capability.[citation needed] After the severance of diplomatic relations, the U.S. 
still maintains unofficial diplomatic relations with Taiwan through the Taipei Economic and Cultural Representative Office; Taiwan's current Representative to the U.S. is Alexander Yui. The American Institute in Taiwan, a non-profit institute incorporated under the laws of the District of Columbia and headquartered in Arlington County, Virginia, serves as the semi-official, working-level US representation. The current Director of the American Institute in Taiwan is Raymond Greene.[citation needed] Taiwan helped Ronald Reagan circumvent the Boland Amendment by providing covert support to the Contras in Nicaragua. Reagan pressured Taiwan into giving up its Sky Horse ballistic missile program. Taiwan's secret nuclear weapons program was revealed after the 1987 Lieyu massacre, when Colonel Chang Hsien-yi, Deputy Director of Nuclear Research at INER, who was secretly working for the CIA, defected to the U.S. in December 1987 and produced a cache of incriminating documents. The CIA oversaw negotiations with the Taiwanese which led them to abandon their nuclear ambitions in return for security guarantees. Since the end of the nuclear weapons program, the "Nuclear Card" has played an important part in Taiwan's relationship with the United States. In 1997 the Speaker of the United States House of Representatives, Newt Gingrich, visited Taiwan and met with President Lee Teng-hui. In 1999 former President Jimmy Carter visited Taiwan. President Bush was asked on 25 April 2001, "if Taiwan were attacked by China, do we (The U.S.) have an obligation to defend the Taiwanese?" He responded, "Yes, we do...and the Chinese must understand that. The United States would do whatever it took to help Taiwan defend herself." He made it understood that "though we (China and the U.S.) have common interests, the Chinese must understand that there will be some areas where we disagree." On the advice of his advisors, Bush later made clear to the press that there was no change in American policy. In July 2002, Minister of Justice Chen Ding-nan became the first Taiwanese government official to be invited into the White House since 1979. On 24 August 2010, the United States State Department announced a change to commercial sales of military equipment in place of the previous foreign military sales in the hope of avoiding political implications.[citation needed] However, pressure from the PRC has continued, and it seems unlikely that Taiwan will be provided with advanced submarines or jet fighters. The Taiwan Policy Act of 2013 was raised and passed in the US House Committee on Foreign Affairs to update the conditions of US-Taiwan relations. In 2015 Kin Moy was appointed Director of the American Institute in Taiwan. U.S. commercial ties with Taiwan have been maintained and have expanded since 1979. Taiwan continues to enjoy Export-Import Bank financing, Overseas Private Investment Corporation guarantees, normal trade relations (NTR) status, and ready access to U.S. markets. In recent years, AIT commercial dealings with Taiwan have focused on expanding market access for American goods and services. AIT has been engaged in a series of trade discussions, which have focused on copyright concerns and market access for U.S. goods and services. On 19 June 2013, the Taiwanese foreign ministry expressed gratitude for a US congressional bill in support of Taiwan's bid to participate in the International Civil Aviation Organization (ICAO). On July 12, 2013, US President Barack Obama signed into law H.R. 
1151, codifying the US government's full support for Taiwan's participation in the ICAO as a non-sovereign entity. The United States has continued the sale of appropriate defensive military equipment to Taiwan in accordance with the Taiwan Relations Act, which provides for such sales and which declares that peace and stability in the area are in U.S. interests. Sales of defensive military equipment are also consistent with the 1982 U.S.-P.R.C. Joint Communiqué.[citation needed] On December 16, 2015, the Obama administration announced a deal to sell $1.83 billion worth of arms to the Armed Forces of Taiwan, a year and eight months after U.S. House passed the Taiwan Relations Act Affirmation and Naval Vessel Transfer Act of 2014 to allow the sale of Oliver Hazard Perry-class frigates to Taiwan. The deal would include the sale of two decommissioned U.S. Navy frigates, anti-tank missiles, Assault Amphibious Vehicles, and FIM-92 Stinger surface-to-air missiles, amid the territorial disputes in the South China Sea. PRC foreign ministry had expressed its disapproval for the sales and issued the U.S. a "stern warning", saying it would hurt China–U.S. relations. On December 2, 2016, U.S. President-Elect Donald Trump accepted a congratulatory call from Taiwanese President Tsai Ing-Wen, which was the first time since 1979 that a President-Elect has publicly spoken to a leader of Taiwan. Donald Trump stated the call was regarding "the close economic, political and security ties between Taiwan and the US". The phone call had been arranged by Bob Dole, who acted as a foreign agent on behalf of Taiwan. In June 2017, the Trump administration approved $1.4 billion arms sales to Taiwan. On 16 March 2018, President Trump signed the Taiwan Travel Act, allowing high-level diplomatic engagement between Taiwanese and American officials, and encourages visits between government officials of the United States and Taiwan at all levels. The legislation has sparked outrage from the PRC, and has been applauded by Taiwan. A new $250 million compound for the American Institute in Taiwan was unveiled in June 2018, accompanied by a "low-key" American delegation. The Chinese authorities described this action as violation of its "one China" policy statement and called on the US to stop any relations with Taiwan. On 17 July 2018, Taiwan's Army officially commissioned all of its Apache attack helicopters purchased from the United States, at cost of $59.31 billion NT(US$1.94 billion), having completed the necessary pilot training and verification of the fleet's combat capability. One of the helicopters was destroyed in a crash during a training flight in Taoyuan in April 2014 and the other 29 have been allocated to the command's 601st Brigade, which is based in Longtan, Taoyuan. Taiwanese President Tsai Ing-wen said the commissioning of the Apaches was "an important milestone" in meeting the island's "multiple deterrence" strategy to counter an invasion and to resist Beijing's pressure with support from Washington, which has been concerned about Beijing's growing military expansion in the South China Sea and beyond. In September 2018, the United States approved the sale of $330 million worth of spare parts and other equipment to sustain the Republic of China Air Force. In July 2019, the US State Department approved the sale of M1A2T Abrams tanks, Stinger missiles and related equipment at an approximate value of $2.2 billion to Taiwan. 
On 26 March 2020, President Trump signed the TAIPEI Act, aiming to increase the scope of US relations with Taiwan and encouraging other nations and international organizations to strengthen their official and unofficial ties with the island nation. In May 2020, the US State Department approved a possible Foreign Military Sale of 18 MK-48 Mod 6 Advanced Technology Heavy Weight Torpedoes for Taiwan in a deal estimated to cost $180 million. On 9 August 2020, U.S. Health and Human Services Secretary Alex Azar visited Taiwan to meet President Tsai Ing-wen, the first visit by an American official since the break in diplomatic relations between Washington and Taipei in 1979. In September 2020, U.S. Under Secretary of State for Economic Growth, Energy, and the Environment Keith J. Krach attended the memorial service for former Taiwanese President Lee Teng-hui. In September 2020, the US Ambassador to the United Nations Kelly Craft met with Amb. James K.J. Lee, Director-General of the Taipei Economic and Cultural Office in New York, who was secretary-general in Taiwan's Ministry of Foreign Affairs until July, for lunch in New York City in what was the first meeting between a top Taiwan official and a United States ambassador to the United Nations. Craft said she and Lee discussed ways the US can help Taiwan become more engaged within the U.N., and she pointed to a December 2019 email alert from Taiwan that WHO had ignored, recognizing and warning about the danger of the person-to-person transmission of the new highly contagious COVID-19 virus in China. In an October 2020 deal of $2.37 billion between the U.S. and Taiwan, the U.S. State Department approved the potential sale to Taiwan of 400 Harpoon anti-ship cruise missiles including associated radars, road-mobile launchers, and technical support. In January 2021, Taiwan's President Tsai Ing-wen met with United States Ambassador to the UN Kelly Craft by video link. Craft said: "We discussed the many ways Taiwan is a model for the world, as demonstrated by its success in fighting COVID-19 and all that Taiwan has to offer in the fields of health, technology and cutting-edge science.... the U.S. stands with Taiwan and always will." Speaking in Beijing, PRC Ministry of Foreign Affairs spokesman Zhao Lijian said: "Certain U.S. politicians will pay a heavy price for their wrong words and deeds." On her last day in office later that month, Craft called Taiwan "a force for good on the global stage -- a vibrant democracy, a generous humanitarian actor, a responsible actor in the global health community, and a vigorous promoter and defender of human rights." In 2021 and 2022, U.S. President Joe Biden made various forceful comments about coming to Taiwan's military defense in the event of a PRC invasion, indicating what scholars called a potential shift to "strategic clarity," while the State Department reiterated that the administration's Taiwan policy remained unchanged. On March 3, 2021, the Biden administration reasserted the strength of the relationship between the U.S. and Taiwan in the administration's Interim National Security Strategic Guidance. On March 8, 2021, the Biden administration made the following statement during a press briefing: "We will stand with friends and allies to advance our shared prosperity, security, and values in the Indo-Pacific region. We maintain our longstanding commitments, as outlined in the Three Communiqués, the Taiwan Relations Act, and the Six Assurances. 
And we will continue to assist Taiwan in maintaining a sufficient self-defense capability." In June 2021 a congressional delegation made up of Tammy Duckworth, Dan Sullivan and Christopher Coons briefly visited Taiwan and met with President Tsai Ing-wen. Their use of a C-17 military cargo aircraft drew strong protest from China. In late October 2021, U.S. Secretary of State Antony Blinken called on all United Nations member states to support Taiwan's participation in the U.N. system. The comments came a day after the 50th anniversary of U.N. Resolution 2758, in which the People's Republic of China was designated as the representative of China at the U.N., while the Republic of China (R.O.C.) was expelled. In December 2021, the U.S. invited Taiwan to the Summit for Democracy. On December 15, 2021, the US House of Representatives and Senate both passed the National Defense Authorization Act for Fiscal Year 2022, which calls for enhancements to the security of Taiwan, including inviting the Taiwanese navy to the 2022 Rim of the Pacific exercise in the face of "increasingly coercive and aggressive behavior" by China. President Joe Biden signed the act on December 27, 2021. On May 23, 2022, President Biden, during his trip to Asia, vowed to defend Taiwan with the US military in the case of an invasion by China. At the end of May, Illinois Senator Tammy Duckworth led a congressional delegation to Taiwan. In late May 2022, the State Department restored a line to its fact sheet on US-Taiwan relations, which it had removed earlier in the month, stating that it did not support Taiwanese independence. However, another line that had also been removed from the earlier fact sheet, which acknowledged China's sovereignty claims over Taiwan, was not restored, while a line stating that the U.S. would maintain its capacity to resist any efforts by China to undermine the security, sovereignty and prosperity of Taiwan, in a manner consistent with the Taiwan Relations Act, was added to the updated fact sheet.[citation needed] On 27 January 2022, U.S. Vice President Kamala Harris and Vice President of Taiwan Lai Ching-te had a brief conversation during the presidential inauguration ceremony of Xiomara Castro of Honduras. On July 28, 2022, U.S. President Joe Biden had a phone call with CCP General Secretary Xi Jinping, during which he "underscored that the United States policy has not changed and that the United States strongly opposes unilateral efforts to change the status quo or undermine peace and stability across the Taiwan Strait." In July 2022 Senator Rick Scott led a congressional delegation to Taiwan. On August 2, 2022, Nancy Pelosi, the Speaker of the United States House of Representatives, led a congressional delegation to Taiwan, leading to a military and economic response from China. Later in August, a congressional delegation led by Massachusetts Senator Ed Markey also visited Taiwan, as did Indiana Governor Eric Holcomb (who became the first Indiana Governor to visit Taiwan since 2005). In late August 2022 Tennessee Senator Marsha Blackburn visited Taiwan. In late August 2022 then Arizona Governor Doug Ducey arrived in Taiwan for a visit focused on semiconductors. In February 2023, Representatives Ro Khanna, Jake Auchincloss, Jonathan Jackson and Tony Gonzales visited Taiwan. In March and April 2023, Tsai Ing-wen, President of Taiwan, traveled to the United States. In March, she met in New York City with House Minority Leader Hakeem Jeffries and a bipartisan group of U.S. 
Senators: Joni Ernst of Iowa, Mark Kelly of Arizona, and Dan Sullivan of Alaska. On April 5, 2023, Tsai met with Kevin McCarthy, the Speaker of the U.S. House of Representatives, at the Ronald Reagan Presidential Library in Simi Valley, California and a bipartisan delegation of House members. The meeting between Tsai and McCarthy marked the first time a Taiwanese President had met with a US House Speaker on American soil and the second time in less than a year that a Taiwanese President had met with a US House Speaker (having met Pelosi in August 2022 in Taiwan). In June 2023 a US congressional delegation comprising nine representatives headed by Mike Rogers visited Taiwan. On June 29, 2023, the State Department approved $440 million in arms sales to Taiwan, pending final approval by Congress. Beijing opposed the move, AIT Chair Laura Rosenberger later stated that the US' "interest in peace and stability across the Strait and our commitments to supporting Taiwan's self-defense capacity are things we will continue to uphold, any complaints from Beijing are not going to change that approach." On July 28, 2023, the Biden administration formally announced a $345 million military assistance package to Taiwan. Both China and North Korea denounced the move. In September 2023 the Biden administration redirected military aid funding which had been appropriated to Egypt to Taiwan and Lebanon in response to a deteriorating human rights situation in Egypt. In October 2023, Taiwan's vice defense minister Hsu Yen-pu urged the US to accelerate arms delivery at the US-Taiwan Defense Industry Conference in Virginia, a key exchange venue for top US and Taiwan defense officials that had been hosted annually since 2012. Some academics and retired Chinese military officers have claimed that Washington is trying to provoke Beijing to attack Taiwan by providing arms to them. CCP General Secretary Xi Jinping, told European Commission president Ursula von der Leyen that the US was trying to trick China into invading Taiwan, but that he would not take the bait. In November 2023 the US state of North Carolina opened an investment office in Taipei. On February 22, 2024, the State Department approved $75 million in weapons sale to Taiwan, the 13th such approval under the Biden administration. The announcement was made shortly prior to a bipartisan U.S. House Select Committee on China delegation led by Mike Gallagher arrived to Taiwan. President-elect Donald Trump has stated in late 2024 that he won't be committed to defending Taiwan if China invades Taiwan during his presidency. Trump has also suggested that Taiwan should pay the US for protecting it from China, referring to the relationship as insurance; especially after how the island 'took' the U.S. semiconductor business, said Trump. "You know, we're no different than an insurance company. Taiwan doesn't give us anything." In February 2025, the State Department removed a statement from its website stating that it does not support Taiwan independence. The website also added support for Taiwan's membership in international organizations. On May 5, 2025, the House of Representatives passed the bipartisan Taiwan International Solidarity Act, introduced by Democrat Congressman Gerald Connolly and Republican Congresswoman Young Kim. The Act condemns China from distorting the description of Taiwan in UN General Assembly Resolution 2758. 
In June 2025, the US cancelled a trip by Taiwanese Defense Minister Wellington Koo to the Washington area to meet Under Secretary of Defense for Policy Elbridge Colby. In July 2025, the Trump administration denied President Lai Ching-te permission to stop in New York during a planned visit to Central America after the PRC objected to the US permitting the stopover. In August 2025, the Trump administration announced a 20% tariff on Taiwan. In 2025 the American government sanctioned two Taiwanese companies, Mecatron Machinery Co Ltd and Joemars Machinery and Electric Industrial Co Ltd, for providing drone-related goods and services to Iran. In November 2025 a delegation of American legislators from New Hampshire, Maine, Massachusetts, Rhode Island, and Vermont visited Taiwan. Political status In 1949, when Generalissimo Chiang Kai-shek's troops decamped to Taiwan at the end of the Chinese civil war, Washington continued to recognize Chiang's "Republic of China" as the government of all China. In late 1978, Washington announced that it would break relations with the government in Taipei and formally recognize the People's Republic of China (PRC) as the "sole legal government of China." Washington's "one China" policy, however, does not mean that the United States recognizes or agrees with Beijing's claims to sovereignty over Taiwan. On July 14, 1982, the Republican Reagan administration gave specific assurances to Taiwan that the United States did not accept China's claim to sovereignty over the island (Six Assurances), and the U.S. Department of State informed the Senate that "[t]he United States takes no position on the question of Taiwan's sovereignty." The U.S. Department of State, in its U.S. Relations With Taiwan fact sheet, states "[T]he United States and Taiwan enjoy a robust unofficial relationship. The 1979 U.S.–P.R.C. Joint Communiqué switched diplomatic recognition from Taipei to Beijing. In the Joint Communiqué, the United States recognized the Government of the People's Republic of China as the sole legal government of China, acknowledging the Chinese position that there is but one China and Taiwan is part of China. The United States position on Taiwan is reflected in "the six assurances to Taiwan", the Three Communiqués, and the Taiwan Relations Act. The "Three Communiqués" include the Shanghai Communiqué, the Normalisation Communiqué, and the August 17 Communiqué, which pledged to abrogate official US-ROC relations, remove US troops from Taiwan and gradually end arms sales to Taiwan, though with no timeline for the latter, an effort made by James Lilley, the Director of the American Institute in Taiwan.[citation needed] Maintaining diplomatic relations with the PRC has been recognized to be in the long-term interest of the United States by seven consecutive administrations; however, maintaining strong, unofficial relations with Taiwan is also a major U.S. goal, in line with its desire to further peace and stability in Asia. In keeping with its China policy, the U.S. does not support de jure Taiwan independence, but it does support Taiwan's membership in appropriate international organizations, such as the World Trade Organization, Asia-Pacific Economic Cooperation (APEC) forum, and the Asian Development Bank, where statehood is not a requirement for membership. In addition, the U.S. 
supports appropriate opportunities for Taiwan's voice to be heard in organizations where its membership is not possible.[citation needed] Intelligence and military cooperation The United States Taiwan Defense Command (USTDC; Chinese: 美軍協防台灣司令部) was a sub-unified command of the United States Armed Forces operating in Taiwan from December 1954 to April 1979. Since the mid-1980s, the U.S. National Security Agency (NSA) and Taiwan's National Security Bureau have jointly operated a signals intelligence (SIGINT) listening station at Yangmingshan. Starting in 1997, Republic of China Air Force pilots began training at Luke Air Force Base in Arizona with the 21st Fighter Squadron after the country purchased its first batch of F-16 jets. In 2007, Taiwan sold the US Department of Defense more than a billion rounds of rifle ammunition to replenish stocks depleted by the early years of the war on terror. In 2019, the U.S. State Department approved a contract to train Taiwanese F-16 pilots at Luke Air Force Base. In 2020, Taiwanese pilots began to be trained at Morris Air National Guard Base with the 162nd Wing. In 2020, the U.S. Marine Raiders jointly trained with the Republic of China Marine Corps. In 2021, former president Tsai Ing-wen stated in an interview that U.S. military personnel were in Taiwan engaged in joint training efforts. In 2022, the two countries entered into talks to co-produce weapons. In 2022, a squadron of Taiwanese F-16s was trained at Luke Air Force Base with the 21st Fighter Squadron. In early 2024, it was reported that teams from 1st Special Forces Group would be continuously stationed with Taiwan's 101st Amphibious Reconnaissance Battalion and Airborne Special Service Company for joint training. Since at least 2021, Taiwanese troops have trained with American forces at Exercise Northern Strike in Michigan at Camp Grayling. In 2025, over 500 Taiwanese troops participated in Exercise Northern Strike. In May 2024, the Republic of China Navy and the United States Navy conducted joint drills in the Western Pacific. In September 2024, the Financial Times reported that SEAL Team Six has conducted joint training with the Taiwanese military. In September 2024, Taiwan's first batch of Harpoon anti-ship missiles arrived in Kaohsiung. In December 2024, Taiwan received its first batch of M1 Abrams main battle tanks. In January 2025, the two navies announced a two-year joint training program. In March 2025, Taiwan and the U.S. extended a program to train Taiwanese F-16 pilots in the U.S. By May 2025, about 500 U.S. military trainers were operating in Taiwan. In May 2025, Taiwan tested its first M142 HIMARS system. In August 2025, Taiwan's Ministry of National Defense announced that it would receive U.S.-made Mark 48 torpedoes. In September 2025, Taiwan's National Chung-Shan Institute of Science and Technology (NCSIST) announced it would jointly manufacture missiles with Anduril Industries. In October 2025, Taiwan's Ministry of Defense announced that it would increase reciprocal visits and observation of military exercises with the U.S. In November 2025, the U.S. announced the sale of fighter jet parts and a NASAMS to Taiwan. In December 2025, the U.S. approved a total arms sale of over $11 billion to Taiwan. The package is geared toward countering an amphibious invasion of Taiwan. In January 2026, the U.S. and Taiwan agreed to jointly produce 155 mm caliber shells. In February 2026, NCSIST and Kratos Defense & Security Solutions tested a jointly-developed jet-powered kamikaze drone. 
Taiwanese satellites have relied on U.S. launch capabilities. Formosat-8 was launched from Vandenberg Space Force Base in 2025, as was Formosat-5 in 2017. Trade In 2013, Taiwan and Nebraska signed an agricultural trade deal. On May 18, 2023, the USTR announced that the US and Taiwan, "under the auspices of the American Institute in Taiwan and the Taipei Economic and Cultural Representative Office in the US, have concluded negotiations on the U.S.-Taiwan Initiative on 21st Century Trade." On August 7, 2023, President Biden signed into law the United States-Taiwan Initiative on 21st-Century Trade First Agreement Implementation Act. In July 2024, Texas governor Greg Abbott signed an economic cooperation agreement between Texas and Taiwan and agreed to open a trade representative office in Taipei. As of 2025, 24 U.S. states and territories have representative offices in Taiwan. On 26 February 2025, China accused Taiwan of using its semiconductor sector to gain political favor from the United States. U.S. President Donald Trump criticized Taiwan for its dominance in the U.S. semiconductor industry. Taiwan's government responded by emphasizing its commitment to preserving its position as a leader in semiconductor technology. In March 2025, President Lai Ching-te met with Alaska governor Mike Dunleavy; at the meeting it was announced that Taiwanese state-owned oil and gas company CPC Corporation would purchase six million tons of natural gas from the U.S. state. In October 2025, Tennessee governor Bill Lee signed a memorandum of understanding with Taiwan to expand economic cooperation. In mid-January 2026, Taiwan and the United States signed a trade agreement for Taiwanese semiconductor and technology companies to invest US$250 billion in the US economy in return for the United States reducing its tariffs on Taiwanese exports from 20 to 15 percent. Consular representation The United States has a de facto embassy in Taipei called the American Institute in Taiwan. It also operates a de facto consulate in Kaohsiung called the Kaohsiung Branch Office of the American Institute in Taiwan. Taiwan is represented by the Taipei Economic and Cultural Representative Office in the United States in Washington, D.C. This mission is also accredited to Cuba, the Bahamas, Grenada, Antigua and Barbuda, Dominica, and Trinidad and Tobago, despite Taiwan not having official relations with them. Other than the mission in Washington, Taiwan also has offices in Atlanta, Boston, Chicago, Honolulu, Houston, Miami, Los Angeles, New York, San Francisco, Seattle, Guam, and Denver. Country leadership Leaders of Taiwan and the United States from 1950. This article incorporates public domain material from U.S. Bilateral Relations Fact Sheets, United States Department of State.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Congo_Basin] | [TOKENS: 2431] |
Contents Congo Basin The Congo Basin is the sedimentary basin of the Congo River. The Congo Basin is located in Central Africa, in a region known as west equatorial Africa. The Congo Basin region is sometimes known simply as the Congo. It contains some of the largest tropical rainforests in the world and is an important source of water used in agriculture and energy generation. The rainforest in the Congo Basin is the largest rainforest in Africa and second only to the Amazon rainforest in size, with 300 million hectares compared to the 800 million hectares in the Amazon. Because of its size and diversity the basin's forest is important for mitigating climate change in its role as a carbon sink. However, deforestation and degradation of the ecology by the impacts of climate change may increase stress on the forest ecosystem, in turn making the hydrology of the basin more variable. A 2012 study found that the variability in precipitation caused by climate change will negatively affect economic activity in the basin. Eight sites of the Congo Basin are inscribed on the World Heritage List, five being also on the list of World Heritage in Danger (all five located in Democratic Republic of the Congo). Fourteen percent of the humid forest is designated as protected. Geology The Congo Basin is a large depression within the Congo Craton, making it a patch of relatively recent (Phanerozoic-aged, and mostly Mesozoic & onwards) sedimentary rock within a large, otherwise extremely ancient (Archean-aged) piece of exposed continental crust. The deformation of the Craton began as early as the late Cambrian or early Ordovician and continued over the Paleozoic, but the deformation over this period led to rapid erosion of much of this Paleozoic rock, creating a large unconformity. Sediment started to rapidly accumulate in the basin from the Mesozoic (Triassic) up to the present day. Deposits throughout the Jurassic suggest the presence of a freshwater, lacustrine habitat in the basin, and this continued into the Early Cretaceous. By the start of the Late Cretaceous, a connection with the Trans-Saharan seaway led to a significant marine incursion into the basin (evidence of an earlier, Late Jurassic marine intrusion is disputed), causing it to serve as a connection between the southern Atlantic Ocean and the Tethys Ocean. Many of the formations deposited by these freshwater and marine habitats are rich in pollen, invertebrate, and vertebrate (primarily fish) fossils. Kimberlite pipes that are thought to have formed during the Cretaceous, possibly due to a shock from a sudden decrease in the rate of seafloor spreading of the southern Mid-Atlantic Ridge, are the source of the region's famous diamonds. By the Cenozoic, an uplift in the borders of the Cuvette Centrale had blocked any further marine connections. During the Paleogene, high rainfall turned the basin into a series of marshy ponds and swamps. A shift to more arid conditions with seasonal droughts occurred with the start of the Neogene. Later in the Neogene, a sudden shift to fluvial deposits suggests a dramatic return to wetter conditions. The following sedimentary geological formations have been deposited in the basin: Description Congo is a traditional name for the equatorial Middle Africa that lies between the Gulf of Guinea and the African Great Lakes. 
The basin begins in the highlands of the East African Rift system with input from the Chambeshi, the Uele and Ubangi rivers in the upper reaches and the Lualaba River draining wetlands in the middle reaches. Because of the young age and active uplift of the East African Rift at the headwaters, the river's yearly sediment load is very large, but the drainage basin occupies large areas of low relief throughout much of its area. It is delineated largely by swells, including the Bie, Mayumbe, Adamaoua, Nile-Congo, East African, and Zambian Swells. The basin ends where the river empties into the Gulf of Guinea on the Atlantic Ocean. The basin covers a total of 3.7 million square kilometers and is home to some of the largest undisturbed stands of tropical rainforest on the planet, in addition to large wetlands. Countries wholly or partially in the Congo region: History The first inhabitants of the Congo Basin area are believed to have been pygmy peoples; the dense forests and wet climate kept the population of the region low and limited it to hunter-gatherer societies, remnants of whose culture survive to the present day. Eventually Bantu peoples migrated there and founded the Kingdom of Kongo. Belgium, France, and Portugal later established colonial control over the entire region by the late 19th century. The General Act of the Berlin Conference of 1885 gave a precise definition to the "conventional basin" of the Congo, which included the entire actual basin plus some other areas. The General Act bound its signatories to neutrality within the conventional basin, but this was not respected during the First World War. The World Resources Institute estimated that 80 million people live in and around the Congo Basin. Climate The Congo Basin is a globally important climatic region with annual rainfall of between 1500 and 2000 mm. It is one of three hotspots of deep convection (thunderstorms) in the tropics, the other two being over the Maritime Continent and the Amazon. These three regions together drive the climate circulation of the tropics and beyond. The Congo Basin has the highest lightning strike frequency of anywhere on the planet. The high rainfall supports the second largest rainforest on Earth, which is a globally significant carbon sink and an important component of the global carbon cycle. Averaged across the whole basin, there are two major rainfall seasons, March to May and September to November. In both hemispheres the rainfall maximises in September to November, at above 210 mm per month. In northern hemisphere winter, rainfall is relatively low to the north of the equator (<80 mm per month). In southern hemisphere winter, rainfall is instead lower to the south of the equator (<80 mm per month). The annual rhythm of the wind systems which carry water vapour accounts for the rainfall seasonality. Much of the rainfall is derived from large mesoscale convective systems. These systems last over 11 hours on average and have a mean size exceeding 500 km² in some parts of the Congo Basin. Temperatures in the Congo Basin (usually between 20 and 30 °C) are lower than in the African desert regions to the north (the Sahara) and to the south (the Kalahari). The differences in temperature between the deserts and the Congo Basin are important for driving wind systems known as African easterly jets, which affect climate and weather in the Sahel and Southern Africa. Future climate projections indicate that the region will get hotter in response to global climate change. 
There is more uncertainty over how average rainfall in the region will change, with the climate models used by the Intergovernmental Panel on Climate Change (IPCC) disagreeing on core elements of the rainfall distribution in the region. While the average rainfall change is uncertain, it is likely that extreme rainfall events will become more extreme owing to the increases in water vapour in the atmosphere. Owing to the global climatic importance of the Congo Basin, it has been suggested that, along with the Amazon, severe changes in the rainfall or climate of the Congo Rainforest could act as a 'tipping point', with widespread impacts on the Earth System. Flora and fauna The Congo forest is home to the okapi, African forest elephant, pygmy hippopotamus, bongo (antelope), chimpanzee, bonobo and the Congo peafowl. Its apex predator is the leopard, which is larger than its savannah counterparts due to lack of competition from other large predators. The basin is home to the endangered western lowland gorilla. In 2010, the United Nations Environment Programme warned that gorillas could become extinct in the greater Congo Basin within 15 years. The Congo Basin is the largest forest in Africa. More than 10,000 plant species can be found in and around the forest. The humid forests cover 1.6 million km². The Congo Basin is an important source of African teak, used for building furniture and flooring. An estimated 40 million people depend on these woodlands, surviving on traditional livelihoods. Ecology and protection At a global level, Congo's forests act as the planet's second lung, counterpart to the rapidly dwindling Amazon. They are a huge "carbon sink", trapping carbon that would otherwise remain in the atmosphere as carbon dioxide. The Congo Basin holds roughly 8% of the world's forest-based carbon. If these woodlands are deforested, the carbon they trap will be released into the atmosphere. Predictions for future unabated deforestation estimate that by 2050 activities in the DRC will release roughly the same amount of carbon dioxide as the United Kingdom has emitted over the last 60 years. A 2013 study by British scientists showed that deforestation in the Congo Basin rainforest was slowing down. In 2017, British scientists discovered that peatlands in the Cuvette Centrale, which cover a total of 145,500 sq km, contain 30 billion tonnes of carbon, or 20 years of U.S. fossil fuel emissions. In 2021, the deforestation rate of the Congolese rainforest increased by 5%. The Global Forest Atlas estimated that the logging industry covers from 44 to 66 million hectares of forest. A study published in 2019 in Nature Sustainability showed that 54,000 miles of roads for forest concessions were built between 2003 and 2018, bringing the total to 143,500 miles. A moratorium on logging in the Congo forest was agreed between the World Bank and the Democratic Republic of the Congo in May 2002. The World Bank agreed to provide $90 million of development aid to the Democratic Republic of the Congo with the provision that the government would not issue any new concessions granting logging companies rights to exploit the forest. The deal also prohibited the renewal of existing concessions. The government has written a new forestry code that requires companies to invest in local development and follow a sustainable, 25-year cycle of rotational logging. 
When a company is granted a concession from the central government to log in Congo, it must sign an agreement with the local chiefs and hereditary land owners, who give permission for it to extract the trees in return for development packages. In theory, the companies must pay the government nearly $18 million rent per year for these concessions, of which 40% should be returned to provincial governments for investment in social development of the local population in the logged areas. In its current form, the Kyoto Protocol does not reward so-called "avoided deforestation"—initiatives that protect forest from being cut down. But many climate scientists and policymakers hope that negotiations for Kyoto's successor will include such measures. If this were the case, there could be a financial incentive for protecting forests. L’Île Mbiye, an island in the Lualaba River in Kisangani, is part of a project about forest ecosystem conservation, conducted by Stellenbosch University. Democratic Republic of the Congo is also looking to expand the area of forest under protection, for which it hopes to secure compensation through emerging markets for forest carbon. The main Congolese environmental organization working to save the forests is an NGO called OCEAN, which serves as the link between international outfits like Greenpeace and local community groups in the concessions. National parks References External links 0°00′00″N 22°00′00″E / 0.0000°N 22.0000°E / 0.0000; 22.0000 |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Cameroonian_Highlands_forests] | [TOKENS: 2617] |
Contents Cameroonian Highlands forests The Cameroonian Highlands forests, also known as the Cameroon Highlands forests, is a montane tropical moist broadleaf forest ecoregion located on the range of mountains that runs inland from the Gulf of Guinea and forms the border between Cameroon and Nigeria. This is an area of forest and grassland which has become more populous as land is cleared for agriculture. Geography The Cameroonian Highlands forests extend across the Cameroon Highlands, a chain of extinct volcanoes, covering an area of 38,000 square kilometers (15,000 sq mi) in eastern Nigeria and western Cameroon. The highlands extend roughly southwest–northeast. In the southwest the ecoregion includes the Rumpi Hills, Bakossi Mountains, Mount Nlonako, Mount Kupe, and Mount Manengouba. It extends northeast towards the Mambila Plateau, and north to the Bamenda Highlands. It continues northeast along the western flank of the Adamawa Plateau to Tchabal Gangdaba. Northern outliers include the Mambilla Plateau to the northeast, Nigeria's Obudu Plateau to the northwest, the Alantika Mountains and Hosséré Vokré to the north, and areas of the southeastern Adamawa Plateau east of Ngaoundéré. The ecoregion lies above 900 meters elevation, and is surrounded at lower elevations by the Cross–Sanaga–Bioko coastal forests at the southern end of the range, and by Guinean forest–savanna mosaic along the central and northern ends of the range. The Cameroon Highlands form the boundary between the Guinean and Northern Congolian forest–savanna mosaic ecoregions. The highest peak within the ecoregion is Mount Oku (3,011 metres (9,879 ft)). Mount Cameroon is the highest mountain in the chain, but its high-elevation forests are designated a separate ecoregion. Climate Mean maximum temperatures are below 20 °C due to the effects of altitude, and are lower than in the surrounding lowlands. Average annual rainfall ranges from around 4,000 mm near the coast to 1,800 mm or less further inland. The highlands are an important source of water for both Nigeria and Cameroon. Flora The vegetation varies with elevation. Submontane forests extend from approximately 900 metres (2,953 ft) to 2,000 metres (6,562 ft) elevation. Above 2,000 metres (6,562 ft) elevation are distinct montane forests and patches of montane grassland, bamboo forest, and subalpine grasslands and shrublands. The ecoregion is characterized by the presence of Afromontane species, which have an archipelago-like distribution across the highlands of Africa and are distinct from the lowland flora. Typical Afromontane tree species are Nuxia congesta, Olea capensis, Podocarpus milanjianus, Prunus africana, Myrsine melanophloeos, and Syzygium staudtii. Submontane forests from 850 to 1600 meters elevation typically have an open canopy. Common trees include species of Alchornea, along with species characteristic of the adjacent lowland plant communities, like semi-deciduous forests (species of Ficus, Santiria, Symphonia, Allanblackia and Anthocleista) and savanna (species of Lannea, Bridelia, Lophira, and Fabaceae). Grasses are abundant in the understorey. From 1600 to 2000 meters, trees from the Euphorbiaceae family predominate, including species of Macaranga, Alchornea, and Mallotus. Savanna and semi-deciduous forest species – Ficus, Lophira, Bridelia, Lannea, and Fabaceae – are also present. The Afromontane genus Astropanax is abundant, and Afromontane species including Olea capensis, Syzygium, Maesa, Meliaceae, and Clematis grow in smaller numbers. 
Grasses remain common in the understory. Afromontane forests grow above 2000 meters elevation. Lower montane forests generally have a denser canopy than the submontane and upper montane forests, with fewer grasses in the shady understorey. Trees in the lower Afromontane forests include species of Astropanax, Alchornea, Myrica, and Ilex, and the palm Elaeis guineensis. Nuxia congesta, Olea capensis, and Astropanax are predominant from 2270 to 2500 meters elevation. In the upper montane forest from 2500 to 2945 meters elevation Podocarpus milanjianus and Astropanax are predominant, together with Myrsine melanophloeos, Syzygium, Prunus africana, Ixora, and shrubs and herbs like Isoglossa, Pavetta, Rubus, and Impatiens. In the northern mountains, including the Adamawa Plateau, Hosséré Vokré, and Alantika mountains, the climate is drier and rainfall is more seasonal. Submontane forests are generally absent, and the Afromontane forests transition directly to savanna. Afromontane forests on the Hosséré Vokré and Alantika mountains are mostly limited to stream valleys and ravines, separated by areas of montane savanna or grassland. The ericaceous belt is a transition between the upper montane forests and high-elevation grasslands, ranging from approximately 2750 up to 2950 meters elevation. Shrubs and stunted trees of genus Ericaceae, including Erica mannii and Erica silvatica, are predominant. Subalpine grasslands grow above 2800 meters elevation, with many grasses, and herbs in the genera Alchemilla and Anthospermum and the families Caryophyllaceae, Asteraceae, and Lamiaceae. Fauna The ecoregion is home to a number of endemic species, along with several more that are also found in the nearby Mount Cameroon and Bioko montane forests ecoregion. Six species of birds are strictly endemic: the Bamenda apalis (Apalis bamendae), white-throated mountain-babbler (Kupeornis gilberti), banded wattle-eye (Platysteira laticincta), Bannerman's weaver (Ploceus bannermani), Mount Kupe bush-shrike (Telophorus kupeensis) and Bannerman's turaco (Tauraco bannermani), which is a cultural icon for the Kom people who live in the area. Seven species are endemic to the Cameroon Highlands forests and Mount Cameroon: Cameroon greenbul (Arizelocichla montana), Bangwa forest warbler (Bradypterus bangwaensis), grey-headed greenbul (Phyllastrephus poliocephalus), yellow-breasted boubou (Laniarius atroflavus), green-breasted bushshrike (Malaconotus gladiator), mountain robin-chat (Cossyphicula isabellae) and a subspecies of Chubb's cisticola, Cisticola chubbi discolor (sometimes considered a separate species C. discolor). Nine more montane endemic species are shared with Mount Cameroon and Bioko: the western greenbul (Arizelocichla tephrolaema), Cameroon olive greenbul (Phyllastrephus poensis), black-capped woodland warbler (Phylloscopus herberti), green longtail (Urolais epichlorus), white-tailed warbler (Poliolais lopezi), Cameroon sunbird (Cyanomitra oritis), Ursula's sunbird (Cinnyris ursulae), Shelley's oliveback (Nesocharis shelleyi), and Cameroon olive-pigeon (Columba sjostedti). Eleven small mammal species are endemic to the ecoregion: Eisentraut's striped mouse (Hybomys eisentrauti), the Mount Oku hylomyscus (Hylomyscus grandis), Mount Oku rat (Lamottemys okuensis), Mittendorf's striped grass mouse (Lemniscomys mittendorfi), Dieterlen's brush-furred mouse (Lophuromys dieterleni) and Eisentraut's brush-furred rat (L. eisentrauti), Oku mouse shrew (Myosorex okuensis,) Rumpi mouse shrew (M. 
rumpii), western vlei rat (Otomys occidentalis), Hartwig's soft-furred mouse (Praomys hartwigi), and Bioko forest shrew (Sylvisorex isabellae). The ecoregion is home to several endangered primates, including the Cross River gorilla (Gorilla gorilla diehli), an endemic subspecies of western gorilla, mainland drill (Mandrillus leucophaeus leucophaeus), Preuss's red colobus (Pilocolobus preussi), chimpanzee (Pan troglodytes) and several species of guenon including Preuss's monkey (Cercopithecus preussi). Forty species of amphibians are endemic to the ecoregion: Petropedetes parkeri, Petropedetes perreti, Phrynobatrachus cricogaster, Phrynobatrachus steindachneri, Phrynobatrachus werneri, Phrynobatrachus species, Phrynodon species, Cardioglossa melanogaster, Cardioglossa oreas, Cardioglossa pulchra, Cardioglossa schioetzi, Cardioglossa trifasciata, Cardioglossa venusta, Astylosternus nganhanus, Astylosternus perreti, Astylosternus montanus, Astylosternus rheophilus, Leptodactylodon axillaris, Leptodactylodon bicolor, Leptodactylodon boulengeri, Leptodactylodon erythrogaster, Leptodactylodon mertensi, Leptodactylodon polyacanthus, Leptodactylodon perreti, Afrixalus lacteus, Hyperolius ademetzi, Hyperolius riggenbachi, Leptopelis nordequatorialis, Xenopus amieti, Xenopus species, Bufo villiersi, Werneria bambutensis, Werneria tandyi, Wolterstorffina mirei. The following reptiles are also considered more or less endemic: Atractaspis coalescens, Pfeffer's chameleon (Trioceros pfefferi), four-horned chameleon (Trioceros quadricornis), Leptosiaphos ianthinoxantha and Angel's five-toed skink (Lacertaspis lepesmei). The gecko Ancylodactylus alantika is endemic to the Alantika Mountains and Hosséré Vokré. Urban areas and settlements In Cameroon the mountains are quite heavily populated and used for farming and grazing; much of this ecoregion lies in the Northwest and Adamawa Regions. Towns include Bamenda, capital of the Northwest and base for visiting the mountains including Oku, the Kilum-Ijim Forest and Lake Nyos. In Nigeria the ecoregion is located mainly on the Mambila Plateau, an area of agricultural and grazing land in Taraba State. Conservation and threats The forest is continually being cleared for firewood, timber and to create farmland, and many of the mountains have lost significant amounts of forest cover. There is very little formal environmental protection. Protected areas 6.9% of the ecoregion is in protected areas. Protected areas include Gashaka-Gumti National Park, Korup National Park, Bayang-Mbo Wildlife Sanctuary, Santchou Faunal Reserve, Gangoro Forest Reserve, Mai Samari Forest Reserve, Ngel-Nyaki Forest Reserve, River Nwum Forest Reserve, Kakara Forest Reserve, and Nguroje Forest Reserve. References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_ref-footnoteC_116-0] | [TOKENS: 11349] |
Contents Extraterrestrial life Extraterrestrial life, or alien life (colloquially aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been scientifically or conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility Jesus could have visited extraterrestrial worlds to redeem their inhabitants.: 26 In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but their existence was a matter of speculation.: 67 In comparison to the life-abundant Earth, the vast majority of intrasolar and extrasolar planets and moons have harsh surface conditions and disparate atmospheric chemistry, or lack an atmosphere. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere. Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar system studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from analysis of telescope and specimen data to radios used to detect and transmit interstellar communication. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium. The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given human history of exploiting other societies. Context Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion Kelvin at the one-second mark. Roughly 15 million years later, it cooled to temperate levels, though the elements of organic life were yet nonexistent. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth. 
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread—by meteoroids, for example—between habitable planets in a process called panspermia. During most of its stellar evolution, stars combine hydrogen nuclei to make helium nuclei by stellar fusion, and the comparatively lighter weight of helium allows the star to release the extra energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During its last stages, stars start combining helium nuclei to form carbon nuclei. The larger stars can further combine carbon nuclei to create oxygen and silicon, oxygen into neon and sulfur, and so on until iron. Ultimately, the star blows much of its content back into the stellar medium, where it would join clouds that would eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place in all the universe, said materials are ubiquitous in the cosmos and not a rarity from the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that may be lethal to humans, the distances cause time delays: the New Horizons took nine years after launch to reach Pluto. No probe has ever reached extrasolar planetary systems. The Voyager 2 left the Solar System at a speed of 50,000 kilometers per hour; if it headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light years, it would reach it in 100,000 years. Under current technology, such systems can only be studied by telescopes, which have limitations. It is estimated that dark matter has a larger amount of combined matter than stars and gas clouds, but as it plays no role in the stellar evolution of stars and planets, it is usually not taken into account by astrobiology. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, not even to actually have such liquid water. Venus is located in the solar system's habitable zone, but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures. 
The actual distances for the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits will change along with the star's stellar evolution. The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. When considered from a cosmic perspective, the brief times of existence of Earth's species may suggest that extraterrestrial life may be equally fleeting on such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe". Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements. A celestial body may not have any life on it, even if it were habitable. Likelihood of existence Life in the cosmos beyond Earth has not been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first one, the size of the universe, allows for plenty of planets to have a similar habitability to Earth, and the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the substances that make life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same ones as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet simultaneously meets all such requirements. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life and that, at this point, it is just a desired result and not a reasonable scientific explanation for any gathered data. 
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The Drake equation is N = R∗ ⋅ fp ⋅ ne ⋅ fl ⋅ fi ⋅ fc ⋅ L, where N is the number of civilizations in the Milky Way whose electromagnetic emissions are detectable, R∗ is the rate of formation of stars suitable for the development of intelligent life, fp is the fraction of those stars with planetary systems, ne is the number of planets per such system with an environment suitable for life, fl is the fraction of suitable planets on which life actually appears, fi is the fraction of life-bearing planets on which intelligent life emerges, fc is the fraction of civilizations that develop a technology releasing detectable signs of their existence into space, and L is the length of time for which such civilizations release detectable signals.: xix Drake's proposed estimates are as follows, but the numbers on the right side of the equation are agreed to be speculative and open to substitution: 10,000 = 5 ⋅ 0.5 ⋅ 2 ⋅ 1 ⋅ 0.2 ⋅ 1 ⋅ 10,000 (a worked calculation of this product appears below).[better source needed] The Drake equation has proved controversial since, although it is written as a math equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to make noteworthy conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets. In other words, there are 6.25×10¹⁸ stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis that explains the formation of the Solar System and other planetary systems would suggest that those can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, giving a potential explanation for the Fermi paradox. Biochemical basis If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, as for life on Earth, which depends on the energy of the Sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight, and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones. 
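For illustration only: the Drake equation discussed above is simply a product of factors, so the example values quoted in the text (5, 0.5, 2, 1, 0.2, 1, and 10,000 years) can be multiplied directly. The minimal Python sketch below shows that calculation; the function and parameter names are illustrative choices, not part of any standard library, and the input values are the speculative estimates quoted above rather than measured quantities.

```python
# Minimal sketch of the Drake equation, N = R* * fp * ne * fl * fi * fc * L.
# The values below are the speculative example estimates quoted in the text,
# not measured quantities.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Return N, the estimated number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

if __name__ == "__main__":
    n = drake_equation(
        r_star=5,         # average rate of suitable star formation (stars per year)
        f_p=0.5,          # fraction of those stars with planetary systems
        n_e=2,            # planets per system with an environment suitable for life
        f_l=1,            # fraction of suitable planets on which life actually appears
        f_i=0.2,          # fraction of life-bearing planets that develop intelligence
        f_c=1,            # fraction of intelligent species emitting detectable signals
        lifetime=10_000,  # years over which such signals are released (L)
    )
    print(f"Estimated communicative civilizations: {n:,.0f}")  # prints 10,000
```

Running the sketch prints 10,000, matching the figure quoted above; substituting different estimates for any factor changes the result proportionally, which is why the equation is treated as a framework for discussion rather than a precise prediction.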
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: the atom speeds, either too fast or too slow, make it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life would be the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, antimony (three bonds), carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant ones in the universe, far more than the others. On Earth's crust the most abundant of those elements is silicon, in the Hydrosphere it is carbon and in the atmosphere, it is carbon and nitrogen. Silicon, however, has disadvantages over carbon. The molecules formed with silicon atoms are less stable, and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem, the difficulty to kickstart a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976 considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage/decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins. 
Extraterrestrial life may still be stuck using RNA, or evolve into other configurations. It is unclear if our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition to those from Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, even if hypothetical. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed it from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research in assessing the capacity of life for developing intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. It is common knowledge that the conditions on other planets in the solar system, in addition to the many galaxies outside of the Milky Way galaxy, are very harsh and seem to be too extreme to harbor any life. The environmental conditions on these planets can have intense UV radiation paired with extreme temperatures, lack of water, and much more that can lead to conditions that don't seem to favor the creation or maintenance of extraterrestrial life. However, there has been much historical evidence that some of the earliest and most basic forms of life on Earth originated in some extreme environments that seem unlikely to have harbored life at least at one point in Earth's history. Fossil evidence as well as many historical theories backed up by years of research and studies have marked environments like hydrothermal vents or acidic hot springs as some of the first places that life could have originated on Earth. 
These environments can be considered extreme when compared to the typical ecosystems that the majority of life on Earth now inhabits, as hydrothermal vents are scorching hot due to the magma escaping from the Earth's mantle and meeting the much colder oceanic water. Even today, diverse populations of bacteria inhabit the areas surrounding these hydrothermal vents, which suggests that some form of life could be supported even in environments as harsh as those found on other planets in the Solar System. What makes these harsh environments plausible sites for the origin of life on Earth, and for the possible emergence of life on other planets, is that the relevant chemical reactions can occur spontaneously there. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy from reduced chemical compounds while fixing carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while still capturing enough energy to support themselves. The early Earth environment was reducing, and these reduced, carbon-fixing compounds were therefore important for the survival and possible origin of life on Earth. Given the limited information scientists have about the atmospheres of planets elsewhere in the Milky Way and beyond, those atmospheres are most likely reducing, or at least very low in oxygen, especially when compared with Earth's atmosphere. If the necessary elements and ions were present on these planets, the same chemistry of reduced, carbon-fixing compounds that occurs around hydrothermal vents could also occur on their surfaces and possibly result in the origin of extraterrestrial life. Planetary habitability in the Solar System The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No extraterrestrial intelligence other than humans is known to exist or to have ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. Venus has a greenhouse effect, the hottest planetary surface in the Solar System, sulfuric acid clouds, and a thick carbon-dioxide atmosphere under enormous pressure; all of its surface liquid water has been lost. Comparing the two planets helps researchers understand the precise differences that lead to conditions beneficial or harmful to life. Despite the conditions against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind removed the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground. 
As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant solar system bodies, found in the Kuiper Belt and outwards, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope to find it on moons orbiting these planets. Europa, from the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because water is sandwiched between layers of solid ice. Europa would have contact between the ocean and the rocky surface, which helps the chemical reactions. It may be difficult to dig so deep in order to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be dug, as it releases water to space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA did not expect this phenomenon and did not equip the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on the surface. It has rivers, lakes, and rain of hydrocarbons, methane, and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculations about lifeforms with different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it is of such a great depth that it would be very difficult to access it for study. Scientific search The science that searches and studies life in the universe, both on Earth and elsewhere, is called astrobiology. With the study of Earth's life, the only known form of life, astrobiology seeks to study how life starts and evolves and the requirements for its continuous existence. This helps to determine what to look for when searching for life in other celestial bodies. This is a complex area of study, and uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017[update], 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology. 
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis. In February 2005 NASA scientists reported they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory that landed the Curiosity rover on Mars. It is designed to assess the past and present habitability on Mars using a variety of scientific instruments. The rover landed on Mars at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms, with the way each one reacts to sunlight. The goal is to help with the search for similar organisms in exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth was studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants with photosynthesis. In August 2011, NASA studied meteorites found on Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out pollution of the meteorites on Earth, as those components would not be freely available the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear if those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first time discovery, in the plumes of Enceladus, moon of the planet Saturn, of hydrogen cyanide, a possible chemical essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. 
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well. Technology may generate technosignatures, effects on the native planet that may not be caused by natural causes. There are three main types of techno-signatures considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves, and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals as well, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would be in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which can be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels may likely be generated and used on such worlds as well. The abundance of chlorofluorocarbons in the atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not strong enough to study exoplanets with the required level of detail to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to it, called Dyson spheres. Those speculative structures would cause an excess infrared radiation, that telescopes may notice. The infrared radiation is typical of young stars, surrounded by dusty protoplanetary disks that will eventually form planets. An older star such as the Sun would have no natural reason to have excess infrared radiation. The presence of heavy elements in a star's light-spectrum is another potential biosignature; such elements would (in theory) be found if the star were being used as an incinerator/repository for nuclear waste products. Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, over four thousand exoplanets have been discovered (6,128 planets in 4,584 planetary systems including 1,017 multiple planetary systems as of 30 October 2025). The extrasolar planets so far discovered range in size from that of terrestrial planets similar to Earth's size to that of gas giants larger than Jupiter. 
The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars[a] have an "Earth-sized"[b] planet in the habitable zone,[c] with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way,[d] that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy when it transits its star, though this might only be feasible with dim stars like white dwarfs. History and cultural impact The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider that the universe is inherently understandable, and they rejected explanations based on supernatural, incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursors to it, such as the idea that explanations had to be discarded if they contradicted observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as Earth being round and not flat. The cosmos was first structured in a geocentric model, which held that the Sun and all other celestial bodies revolve around Earth. However, those bodies were not considered worlds: in Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos.
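To make the arithmetic behind the habitable-planet estimate quoted above easier to follow, the short Python sketch below multiplies an assumed star count by assumed occurrence fractions. The specific fractions (the share of stars that are Sun-like or red dwarfs, and the share of each that host an Earth-sized planet in the habitable zone) are illustrative assumptions chosen only to reproduce the quoted ballpark figures; they are not taken from the underlying studies.

```python
# Back-of-the-envelope estimate of potentially habitable, Earth-sized planets
# in the Milky Way. All fractions below are illustrative assumptions chosen to
# roughly reproduce the "11 billion, rising to 40 billion" figures quoted above.

MILKY_WAY_STARS = 200e9          # assumed total number of stars

SUNLIKE_FRACTION = 0.25          # assumed share of stars that are Sun-like
SUNLIKE_WITH_HZ_EARTH = 0.22     # assumed "about 1 in 5" Sun-like stars with an
                                 # Earth-sized planet in the habitable zone

RED_DWARF_FRACTION = 0.73        # assumed share of stars that are red dwarfs
RED_DWARF_WITH_HZ_EARTH = 0.20   # assumed occurrence rate around red dwarfs

sunlike_planets = MILKY_WAY_STARS * SUNLIKE_FRACTION * SUNLIKE_WITH_HZ_EARTH
red_dwarf_planets = MILKY_WAY_STARS * RED_DWARF_FRACTION * RED_DWARF_WITH_HZ_EARTH

print(f"Around Sun-like stars: ~{sunlike_planets / 1e9:.0f} billion")
print(f"Including red dwarfs:  ~{(sunlike_planets + red_dwarf_planets) / 1e9:.0f} billion")
```

Changing any of the assumed fractions shifts the totals proportionally, which is one reason published estimates of this kind span such a wide range.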
Eventually two groups emerged: the atomists, who thought that matter both on Earth and in the cosmos was equally made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and its plants must have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all of the earth element naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center, it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds differed from the current knowledge about the structure of the universe and did not postulate the existence of planetary systems other than the Solar System. When those authors wrote about other worlds, they meant places located at the center of their own systems, each with its own stellar vault and cosmos surrounding it. The Greek ideas and the disputes between atomists and Aristotelians outlived the fall of the Greek empire. The Great Library of Alexandria compiled information about these ideas, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and this knowledge spread through the Byzantine Empire. From there it eventually returned to Europe during the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute became intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the Church itself. The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras. He proposed the idea that life exists everywhere. By the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants.
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, resolved the final doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is but a planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making our planet truly special. The new ideas were met with resistance from the Catholic Church. Galileo was tried for defending the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him. The heliocentric model was further strengthened by Sir Isaac Newton's theory of gravity, which provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model was definitively discarded. By this time, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons it works that way. There was very little actual discussion about extraterrestrial life before this point, as the Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, this not only meant that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights but physical objects. The notion that life may exist on them as well soon became an ongoing topic of discussion, although one with no practical way to investigate it. The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S.
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, there was little thought about the conditions on each celestial body: it was simply assumed that life would thrive anywhere. That theory was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System remained strong until Mariner 4 and Mariner 9 provided close-up images of Mars, which put an end to the idea of Martians and lowered the previous expectations of finding alien life in general. The end of the belief in spontaneous generation forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Some of those authors are Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903). The science fiction genre, although not yet so named at the time, developed during the late 19th century. The growing presence of extraterrestrials in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, and others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, while later, more powerful telescopes revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed that there was nothing special about the site. The search and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology, not because of the origin of life on Earth, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or only native to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study, as all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse. The 20th century came with great technological advances, speculation about future hypothetical technologies, and increased basic scientific knowledge among the general population thanks to science popularization through the mass media. Public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers on the existence of aliens.
Ufology claims that many unidentified flying objects (UFOs) are spaceships from alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that the people of the time failed to understand it. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects or weather phenomena, or as hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas which are largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves to be unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century, it was accepted that, within the Solar System, multicellular life can exist only on Earth, but interest in extraterrestrial life increased regardless, as a result of advances in several sciences. Knowledge of planetary habitability allows the likelihood of finding life on each specific celestial body to be considered in scientific terms, as it is known which features are beneficial and which are harmful to life. Astronomy and telescopes also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft make it possible to send robots to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found, and life may yet prove to be unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that at least other planets are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance".
In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". Government responses The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence, but the discussion ended without any conclusions. As of 2010, the UN lacked response mechanisms for the case of an extraterrestrial contact. One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese Government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research. He also acknowledged the possible existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of "unidentified aerospace phenomena". The agency maintains a publicly accessible database of such phenomena, with over 1,600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of entries, their extraterrestrial origin can neither be confirmed nor denied.
In 2020, the chairman of the Israel Space Agency, Isaac Ben-Israel, stated that the probability of detecting life in outer space is "quite large". However, he disagrees with his former colleague Haim Eshed, who stated that there are contacts between an advanced alien civilisation and some of Earth's governments. In fiction Although the idea of extraterrestrial peoples became feasible once astronomy had developed enough to understand the nature of planets, such beings were at first not thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This was changed by the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A common way to do that was to add body features from other animals, such as insects or octopuses. The feasibility of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and later as CGI became more effective and less expensive. Real-life events sometimes capture people's imagination, and this influences works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that, once taken up in works of fiction, eventually became the grey alien archetype. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_ref-130] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2, for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
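In modern notation, the calculation Torres Quevedo's 1914 design was meant to automate, evaluating a^x(y − z)^2 for a sequence of sets of values, can be restated in a few lines. The Python sketch below is only an illustration of that kind of repetitive tabulation; the function name and the sample value sets are invented for the example and do not describe the machine itself.

```python
# Evaluate a^x * (y - z)^2 for a sequence of value sets, the kind of repetitive
# tabulation Torres Quevedo's proposed machine was meant to automate.
# The value sets below are arbitrary sample inputs, not historical data.

def torres_formula(a: float, x: float, y: float, z: float) -> float:
    return a ** x * (y - z) ** 2

value_sets = [
    (2.0, 3.0, 7.0, 4.0),   # 2^3 * (7 - 4)^2 = 8 * 9 = 72
    (1.5, 2.0, 5.0, 1.0),   # 1.5^2 * 16 = 36
    (3.0, 0.5, 10.0, 6.0),  # sqrt(3) * 16 ≈ 27.71
]

for a, x, y, z in value_sets:
    print(f"a={a}, x={x}, y={y}, z={z} -> {torres_formula(a, x, y, z):.4f}")
```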
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs: changing such a machine's function required re-wiring and re-structuring it. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
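Turing's model, described above, is an abstract device that reads and writes symbols on a tape under the control of a finite table of rules. The following Python sketch simulates one small, concrete Turing machine (not the universal machine) whose rule table increments a binary number written on the tape; the state names, symbols and rule table are invented for the illustration.

```python
# A minimal Turing machine simulator. The rule table below implements a single
# concrete machine that adds 1 to a binary number written on the tape; it is an
# illustration of the tape-and-rules model, not Turing's universal machine itself.

def run_turing_machine(tape_str, rules, start_state, halt_state, blank=" "):
    tape = dict(enumerate(tape_str))      # sparse tape: position -> symbol
    head, state = 0, start_state
    while state != halt_state:
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip()

# Rule table: (state, read symbol) -> (symbol to write, head move, next state)
increment_rules = {
    ("seek_end", "0"): ("0", +1, "seek_end"),   # skip over digits
    ("seek_end", "1"): ("1", +1, "seek_end"),
    ("seek_end", " "): (" ", -1, "carry"),      # past the number: turn around
    ("carry", "1"): ("0", -1, "carry"),         # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", 0, "halt"),           # absorb the carry and stop
    ("carry", " "): ("1", 0, "halt"),           # carried past the leftmost digit
}

print(run_turing_machine("1011", increment_rules, "seek_end", "halt"))  # -> 1100
```

A universal machine differs only in that the rule table it follows is itself encoded on the tape, which is essentially the stored-program idea discussed above.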
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on the work of Carl Frosch and Lincoln Derick on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. They are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which a computer is controlled and provided with data; examples include keyboards, mice and joysticks. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
Examples include monitors and printers. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system repeatedly fetches the next instruction from memory, decodes it, and executes it; this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
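To make the byte-oriented storage and two's-complement notation described above concrete, the short Python sketch below stores a negative number in two consecutive bytes and reads it back; the choice of two bytes and little-endian ordering is arbitrary and purely illustrative.

    value = -123
    # Store the number in two consecutive bytes using two's-complement notation.
    raw = value.to_bytes(2, byteorder="little", signed=True)
    print(list(raw))   # [133, 255]: two cells, each holding a number from 0 to 255
    # Interpreting the same two bytes as a signed value recovers the original number.
    print(int.from_bytes(raw, byteorder="little", signed=True))   # -123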
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
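The time-sharing idea can be sketched with a deliberately simplified Python model in which each "program" is a generator and a small round-robin scheduler hands out one slice of execution at a time; real operating systems rely on hardware interrupts and pre-emption rather than this cooperative arrangement, and the names below are invented for illustration.

    from collections import deque

    def program(name, steps):
        # A toy "program" that does a little work each time it receives a time slice.
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                      # give up the processor until the next slice

    # Round-robin scheduling: each program gets one slice, then rejoins the queue.
    ready = deque([program("editor", 2), program("music player", 3)])
    while ready:
        task = ready.popleft()
        try:
            next(task)                 # run one time slice of this program
            ready.append(task)         # it is not finished, so queue it again
        except StopIteration:
            pass                       # this program has run to completion

The output interleaves the two programs' steps, even though only one of them is ever executing at any instant.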
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
Written in an assembly language such as that of the MIPS architecture, this repetitive addition amounts to a loop of only a few instructions. Once told to run such a program, the computer will perform the addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
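As a rough sketch of what an assembler does, the short Python fragment below translates mnemonics into lists of numbers that a machine could store; the mnemonics, opcode values, and instruction layout are invented for illustration and do not correspond to any real instruction set.

    # An invented, minimal instruction set: mnemonic -> numeric opcode.
    OPCODES = {"LOAD": 1, "ADD": 2, "JUMP": 3, "HALT": 4}

    def assemble(source):
        # Turn lines such as "ADD 3 1 2" into lists of numbers.
        machine_code = []
        for line in source.strip().splitlines():
            mnemonic, *operands = line.split()
            machine_code.append([OPCODES[mnemonic]] + [int(x) for x in operands])
        return machine_code

    source = """
    LOAD 1 20
    LOAD 2 22
    ADD 3 1 2
    HALT
    """
    print(assemble(source))   # [[1, 1, 20], [1, 2, 22], [2, 3, 1, 2], [4]]

Because every processor family defines its own numeric encodings, the output of such a translation is meaningful only to the architecture it targets.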
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. See also Notes References Sources External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#CITEREFStockdale1995] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began as a "stunning" success, with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, who said "$299" and left the audience with a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games and consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 across Sony showrooms, selling 100 units. Sony finally launched the console (PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation was also doing well in markets where it was never officially released. For example, in Brazil the console could not be released officially because a third company had registered the trademark, so the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, PlayStation imports and widespread piracy increased. In another market, China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (red E). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal widened, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclub owners such as Ministry of Sound and festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, prompted by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001, and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering roughly 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. It features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusual for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can also generate up to 4,000 sprites and 180,000 textured polygons per second, or 360,000 polygons per second when flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors from the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version only retaining one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service, and came with the documentation and software necessary to program PlayStation games and applications using C compilers.
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square. Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controllers are roughly 10% larger than the Japanese variant, to account for the fact the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad, and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that was not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-Rs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, the drive could not detect the wobble frequency, and duplicated discs therefore omitted it, since the laser pick-up system of any optical disc drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, because the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser tilts and no longer points directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. 
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (released in the West as Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers continued to contribute to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the hardware of Sega and Nintendo. 
In May 1995, Famicom Tsūshin scored the console 19 out of 40, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for each of the five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most games ever produced for a console. Its success resulted in a significant financial boon for Sony, as profits from its video game division came to contribute 23% of the company's total profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success, and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern for copy protection, which the proprietary cartridge format helped enforce, given its substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published smaller print runs of a wider variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get them onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Alonei_Abba] | [TOKENS: 3073] |
Alonei Abba (Hebrew: אַלּוֹנֵי אַבָּא, lit. 'Abba's Oaks') is a moshav shitufi in northern Israel. Located in the Lower Galilee near Bethlehem of Galilee and Alonim, in the hills east of Kiryat Tivon, it falls under the jurisdiction of the Jezreel Valley Regional Council. In 2023 it had a population of 933. The modern village was founded in 1948 on the site of the historical Arab village of Umm el Amad, later the German Protestant colony of Waldheim. History Archaeological investigations indicate that this was an industrial agricultural processing area in the Hellenistic and Roman periods. Among the remains found are a Roman-period industrial oil press and a winepress, in addition to a paved path from the same era. Umm al-'Amad was mentioned in the Ottoman defter for the year 1555–6 as Mezraa land (that is, cultivated land), located in the Nahiya of Tabariyya of the Liwa of Safad. The land was designated as Ziamet land. In 1799 it appeared as a village named Zebed on the Carte de l'Égypte (Description de l'Égypte) of Pierre Jacotin, and in the 1880s as Umm el Amed (Arabic: ام العمد) on the PEF Survey of Palestine. The 1799 Jacotin map had not surveyed the area; it was drawn based on the notes of an inhabitant of Shefa-ʻAmr, and some parts are incorrect. In 1859 the British consul Rogers stated that the population of Umm al-Amed was 100 and the tillage was ten feddans. In 1875 Victor Guérin found Umm al-Amed situated on a small plateau, surrounded by gardens. In spite of its name, Umm al-Amed, which means "the place with the columns", Guérin could find no columns. In 1881 the Palestine Exploration Fund's Survey of Western Palestine described it as standing in oak woods on a hilltop, with an ancient rock-cut sepulchre on the east side. A population list from about 1887 showed that Umm el Amad had about 55 inhabitants, all Muslims. In 1907 the colony of Waldheim (German: "Forest Home" or "Forestville") was founded by German Protestants affiliated with the Old-Prussian State Church on land purchased from the village of Umm al-Amed. Most of the colonists came from the German Colony (Haifa), which was founded by the Templers. In 1874, the Temple Society underwent a schism, and envoys of the Evangelical State Church of Prussia's older Provinces successfully proselytised among the schismatics. Thus the Haifa German Colony became home to two Christian denominations and their congregations. While in Germany the Templers were regarded as sectarians, the Evangelical proselytes gained major financial and ideological support from Lutheran and United church bodies. This created an atmosphere of mistrust and envy among the German colonists in Haifa. Due to population increase and the ongoing urbanisation of Haifa, they searched for land to found new monodenominational colonies. Thus the Protestants founded Waldheim, while Templers settled in the neighbouring Bethlehem of Galilee. The purchase price of 170,000 francs was financed by a Haifa-based bank, the Darlehenskasse der deutschen evangelischen Gemeinde Haifa GmbH, and completely refinanced by the Stuttgarter Gesellschaft zur Förderung der deutschen Ansiedlungen in Palästina. The colony comprised 7,200,000 square metres (7,200 dunams). The settlement was inaugurated on the occasion of the Harvest Festival on 6 October 1907. At this time, the new Waldheimers were still living in the simple clay huts purchased from the previous owners. 
The Haifa engineer Ernst August Voigt presented the plan of the streets and the 16 allotments around a central plot reserved for a church. In 1909 the Jerusalemsverein (Jerusalem Association), a Berlin-based organisation supportive of Protestant activities in the Holy Land, contributed money for the development of a water supply. By 1914, the residents had planted 5,000 square metres of vineyard and more than 500 olive trees. In December 1913 the farmers of Waldheim and Bethlehem keeping dairy cattle founded a common dairy cooperative to pasteurise milk and deliver it to Haifa. In the 1922 census of Palestine conducted by the British authorities, Umm al Amad had a population of 128: 63 Christians and 65 Muslims. Of the Christians, 62 were Protestant and one was Greek Catholic (Melkite). This had increased slightly by the 1931 census, when Umm el Amad had a population of 231, comprising 163 Muslims and 68 Christians in a total of 76 inhabited houses. Most of the residents bore German citizenship. In the course of the 1930s some Waldheimers joined the Nazi party, indicating a fading affinity to the Evangelical ideals. By August 1939, 17% of all Gentile Germans in Palestine were enrolled as members of the Nazi party. After the Nazi takeover in Germany, the new Reich government adapted foreign policy to Nazi ideals, based on the idea that Germany and Germanness were equal to Nazism. German-language international schools subsidised or fully financed with government funds were asked to redraw their educational programmes and employ teachers aligned with the Nazi party. The teachers in Waldheim were financed by the Reich, so Nazi teachers took over here as well. In 1933 Gentile Germans living in Palestine appealed unsuccessfully to Paul von Hindenburg and the Foreign Office not to use swastika symbols for German institutions. Some Gentile Germans pleaded with the Reich's government to drop its announced plan to boycott shops of Jewish Germans on 1 April 1933. Later the opposition of Gentile Germans in Palestine acquiesced. A Palestinian branch of the Hitler Youth was built up with the help of German government subsidies. By 1935 the Nazis had succeeded in streamlining the municipal bodies of the settlements of Gentile Germans in Palestine. On 20 August 1939 the German government ordered the recruitment of Gentile German men into the Wehrmacht; 350 followed the call. According to one Nahalal resident, until the outbreak of World War II the German community had good relationships with local members of the Yishuv, selling them stock while the Jewish farmers went there to study agricultural methods, and the Germans would bring them gifts of bread on the last day of Passover. After the start of the war, all Germans in Palestine were classed as enemy aliens. The British authorities decided to intern most of the enemy aliens. Sarona, Bethlehem of Galilee, Waldheim, and Wilhelma were converted into internment camps. Most enemy aliens living elsewhere in Palestine (Gentile Germans, Hungarians and Italians) were interned in one of the settlements, while the inhabitants of the settlements simply stayed where they were. In summer 1941, 665 interned Templers from all their settlements, mainly young families with children, were transported to Australia for internment. Many of the remaining Germans were either too old or too sick to leave. The internees could maintain agricultural production to feed themselves and supply the surplus to market in return for supplies not available within the camps. 
In 1941, 1942 and 1944, by way of internee exchanges, another 400 Evangelical and Templer internees, mostly wives and children of men who had followed the call for recruitment, were repatriated via Turkey to Germany for family reunification. In the 1945 statistics, based on an official land and population survey, the population of Waldheim/Umm el Amad consisted of 260 people, 150 Muslims and 110 Christians, and the total land area was 9,227 dunams. 170 dunams of land were designated for plantations and irrigable land and 4,776 for cereals, while 102 dunams were built-up areas. After the war the Palmach staged provocative operations against the German communities to impress upon them that they were unwelcome in Palestine, with hit squads killing several Germans; two of the killings involved members of the Waldheim community (Mitscherlich and Müller). In 1946 Moshe Shertok, on behalf of the Jewish Agency, requested that Palestine's German colonies be liquidated and their properties turned over to the Agency as part of the reparations Germany owed the Jewish people. According to Meir Amit, who led the operation, a decision to take over the villages, which were considered friendly to the Arabs, was taken in March because Amin al-Husayni had had contact with the Nazi government during World War II. The two Templer villages, the other being Bethlehem, were under British protection, and both were considered 'unreliable' by Jewish forces. There was a perception that the British, who were due to evacuate Palestine in May, tended to hand over areas under their control to local Arabs, and the area was strategic because it was close to the main axis leading to the Nahalal police station. According to Hagai Binyamini, the German estates were very neat and emblematic of German order and efficiency, with housing made of stone in contrast to the sheet metal, old pipes and wiring used to build local Jewish settlements. At 04:00 on 17 April 1948 a unit composed of three platoons of Golani troopers from the Dror and Nafat Levi Battalions, drawn from members of the Nahalal, Alonim, Kfar Yehoshua, Sde Ya'akov and Sha'ar HaAmakim settlements and backed by armoured trucks mounted with machine guns, penetrated into Waldheim via the woodland. Some of the soldiers were Holocaust survivors, and many were fresh from combat at Mishmar HaEmek. The Germans put up no resistance, and shots fired were attributed to Arabs. The few British soldiers under camp commander Alan Tilbury were unable to impede the attack, during which two colonists, Karl and Regina Aimann, were shot dead, 'before they could even say "good morning"', in the words of Meir Amit. Newspaper accounts the day after reported that they were shot when they resisted arrest while armed, and that the action was taken to intercept plans by 'Arab gangs' to take over the property. The killings occurred in front of their eldest son, Traugott. Having ordered their three children, Traugott, Helmut and Gisela, to hide in a bedroom, the couple went to the door when two Jewish soldiers began knocking loudly, and were cut down when they opened it. The family had reportedly taken refuge close to a British defensive position. A third colonist, Katharina Deininger (65), who was milking in the cowshed at the time, was severely wounded when she was shot in the head. Medical assistance was denied to the father, who was still alive, and the community, once rounded up, was locked in a building and later subjected to a long speech in which they were all denounced as Nazis. 
They also underwent a body search to discover, without success, whether anyone bore SS tattoos. One trooper assured the internees that "we are not like the Germans, we will not behave like the Germans". The internees were given 20 minutes to collect what remained of their belongings, all their valuables and good clothes having been looted in the meantime, together with ploughs, disks and tractors. The Germans were then stripped of any documents and some books they had recovered, before being handed over to the British, who evacuated them to Haifa. The soldiers who shot the Aimanns were reprimanded (one of them, Chummi Zarchi from Nahalal, had angry memories of several Ukrainian relatives killed in the Holocaust), and the looting was deplored not on moral grounds but because it endangered operative priorities. This incident and the end of the Mandate forced the British to hasten the resettlement; thus all the internees, 51 Germans and 4 Swiss, as well as those from the other settlements, were transferred to Cyprus, into a camp of simple tents near Famagusta. By 14 May 1948, when Israel became independent, only about 50 Germans, mostly elderly and sick persons, were living in the new state. They voluntarily left the country or were later expelled by the government. On 12 May 1948 a group of young Zionist pioneers from Czechoslovakia, Austria and Romania, members of HaNoar HaTzioni, established the kibbutz BaMa'avak ("In the Struggle") in the abandoned colony, after four years of agricultural training in Herzliya. Three years later, the kibbutz became a moshav shitufi and the name was changed to Alonei Abba in memory of Abba Berdichev, who was parachuted into Czechoslovakia in 1943 to assist clandestine British forces, but was captured and executed in 1945. Landmarks Hans Martin Kuno Moderow (1877–1945), pastor of the Haifa Evangelical Congregation (1908–18), also provided services in Waldheim, at first in the living room of the new house of Waldheim's then mayor, Gottlob Weinmann. The Waldheimers saved funds for a church of their own and were thus able to lay the cornerstone for the church in early 1914. The Haifa-based architect Otto Lutz led the construction works. In 1921, the Evangelical church at Alonei Abba, which still stands today, was inaugurated. The Alon winery, surrounded by a grove of oak trees, is located in the former dairy cooperative (est. 1913). Alonei Abba nature reserve In 1994, a 950-dunam nature reserve was declared close by, to the north. The reserve is home to Valonia oak trees (Quercus macrolepis) and Palestine oak (Quercus calliprinos). Other flora in the forest includes terebinths (Pistacia terebinthus), storax trees (Styrax officinalis), carobs (Ceratonia siliqua), buckthorns (Rhamnus palaestinus), and Judas trees (Cercis siliquastrum). Most of the reserve is open for experimental grazing by cattle from the moshav.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Chad_Basin] | [TOKENS: 3312] |
The Chad Basin is the largest endorheic basin in Africa, centered approximately on Lake Chad. It has no outlet to the sea and contains large areas of semi-arid desert and savanna. The drainage basin is approximately coterminous with the sedimentary basin of the same name, but extends further to the northeast and east. The basin spans eight modern countries, including most of Chad and a large part of Niger, Nigeria and Cameroon. A combination of dams, increased irrigation, climate change, and reduced rainfall is causing water shortages, and Lake Chad continues to shrink. Geology The geological basin, which is smaller than the drainage basin, is a Phanerozoic sedimentary basin formed during the plate divergence that opened the South Atlantic Ocean. The basin lies between the West African Craton and the Congo Craton, and formed at about the same time as the Benue Trough. It covers an area of about 2,335,000 square kilometres (902,000 sq mi). It merges into the Iullemmeden Basin to the west at the Damergou gap between the Aïr and Zinder massifs. The floor of the basin is made of Precambrian bedrock covered by more than 3,600 metres (11,800 ft) of sedimentary deposits. The basin may have resulted from the intersection of an "Aïr-Chad Trough" running NW–SE and a "Tibesti-Cameroon Trough" running NE–SW. That is, the two deepest parts are an extension of the Benue Trough that runs northeast to the margin of the basin, and another extension running from below the present lake to below the Ténéré rift structure east of the Aïr massif. The southern part of the basin is underlain by another elongated depression, which runs in an ENE direction and extends from the Yola arm of the Benue Trough. At times, parts of the basin were below the sea. In the northeastern part of the Benue Trough, where it enters the Chad Basin, there are marine sediments from the Late Cretaceous (100.5–66 Ma[a]). These sediments seem to be considerably thicker towards the northeast. Boreholes under Maiduguri have found marine sediments 400 metres (1,300 ft) deep, lying over continental sediments 600 metres (2,000 ft) deep. The sea seems to have retreated from the western part of the basin in the Turonian (93.5–89.3 Ma). In the Maastrichtian (72.1–66 Ma) the west was non-marine, but the southeast probably was still marine. No marine sediments have been found from the Paleocene (66–56 Ma). For most of the Quaternary, from 2.6 million years ago to the present, the basin seems to have been a huge, well-watered plain, with many rivers and water bodies, probably rich in plant and animal life. Towards the end of this period the climate became drier. Around 20,000–40,000 years ago, eolianite sand dunes began to form in the north of the basin. During the Holocene, from 11,000 years ago until recently, a giant "Lake Mega-Chad" covered an area of more than 350,000 square kilometres (140,000 sq mi) in the basin. It would have drained to the Atlantic Ocean via the Benue River. Stratigraphic records show that "Mega-Chad" varied in size as the climate changed, with a maximum about 2,300 years ago. The remains of fish and molluscs from this period are found in what are now desert regions. Drainage basin extent The Chad Basin covers almost 8% of the African continent, with an area of about 2,434,000 square kilometres (940,000 sq mi). It is surrounded by mountains. The Aïr Mountains and the Termit Massif in Niger form the western boundary. 
To the northwest, in Algeria, are the Tassili n'Ajjer mountains, including the 2,158-metre (7,080 ft) Jebel Azao. The Tibesti Mountains to the north of the basin include Emi Koussi, the highest mountain in the Sahara at 3,415 metres (11,204 ft). The Ennedi Plateau lies to the northeast, rising to 1,450 metres (4,760 ft). The Ouaddaï highlands lie to the east; they include the Marrah Mountains in Darfur, up to 3,088 metres (10,131 ft) in height. The Adamawa Plateau, Jos Plateau, Biu Plateau, and Mandara Mountains lie to the south. To the west the basin is separated by a watershed from the Niger River, and to the south it is separated by a basement dome from the Benue River. Further east, watersheds separate it from the Congo Basin and the river Nile. The lowest part of the basin is not Lake Chad but the Bodélé Depression, 480 kilometres (300 mi) to the northeast of the lake. The Bodélé Depression is just 155 metres (509 ft) above sea level in its deepest portion, while the surface of Lake Chad is 275 metres (902 ft) above sea level. The basin spans parts of eight countries: Algeria, Cameroon, the Central African Republic, Chad, Libya, Niger, Nigeria, and Sudan. Climate and ecology The northern half of the basin is desert, containing the Ténéré desert, the Erg of Bilma and the Djurab Desert. South of that is the Sahel zone, dry savanna and thorny shrub savanna. The main rivers are bordered by riparian forests, flooding savannas and wetland areas. In the far south there are dry forests. Rainfall varies widely from year to year. Annual rainfall is very slight in the north of the basin, increasing to 1,200 millimetres (47 in) in the south. As of 2000, the basin remained home to large populations of wildlife. In the Sahel these include antelopes such as the addax and dama gazelle, and in the savanna there are korrigum and red-fronted gazelle. The black crowned crane and other waterbirds are found in the wetlands. There are populations of elephants, giraffes, and lions. The western black rhinoceros was once common but is now extinct. Elephants almost became extinct by the end of the nineteenth century due to European and American demand for ivory, but stocks have since recovered. Water resources The seasonal Korama River in the south of Niger does not reach Lake Chad. Nigeria includes two sub-basins that drain into Lake Chad. The Hadejia-Jama'are-Yobe sub-basin in the north contains the Hadejia and Jama'are rivers, which supply the 6,000 square kilometres (2,300 sq mi) Hadejia-Nguru wetlands. They converge to form the Yobe, which defines the border between Niger and Nigeria for 300 kilometres (190 mi), flowing into Lake Chad. About 0.5 cubic kilometres (0.12 cu mi) of water reaches Lake Chad annually by this route. Construction of upstream dams and growth in irrigation have reduced water flow, and the floodplains are drying. The Yedseram-Ngadda sub-basin further south is fed by the Yedseram River and Ngadda River, which join to form an 80-square-kilometre (31 sq mi) swamp to the southwest of the lake. There is no significant water flow from the swamp to the lake. The Central African Republic (CAR) contains the sources of the rivers Chari and Logone, which flow north into the lake. The volume of water entering Chad annually from the CAR has decreased from about 33 cubic kilometres (7.9 cu mi) before the 1970s to 17 cubic kilometres (4.1 cu mi) during the 1980s. A further 3 cubic kilometres (0.72 cu mi) to 7 cubic kilometres (1.7 cu mi) of water annually flows from Cameroon into Chad via the Logone River. 
The Chari-Logone system accounts for about 95% of the water entering Lake Chad. The Nigerian section of the basin contains an upper aquifer of Early Pleistocene alluvial deposits, often covered by recent sand dunes and varying in thickness from 15 to 100 metres (49 to 328 ft). It consists of interbedded sands, clays and silts, with discontinuous clay lenses. The aquifer recharges from run-off and rainfall. The local people access the water with hand-dug wells and shallow boreholes, and use it for domestic purposes, growing vegetables and watering their livestock. Below this aquifer, separated from it by a sequence of grey to bluish-grey clays from the Zanclean, is a second aquifer at a depth of 240 to 380 metres (790 to 1,250 ft). Due to intensive pumping, since the start of the 1980s the water levels in both aquifers have been lowered, and some wells no longer function. There is a third, much lower, aquifer in the Bima Sandstones, at a depth of 2,700 to 4,600 metres (8,900 to 15,100 ft). Oil and gas resources Both the current and historical areas of the basin and the mega-basin contain concentrations of fossil fuels. Fossil-fuel deposits in the area are estimated to exceed a trillion barrels of reserves. Management The Lake Chad Basin Commission was established in 1964 by Cameroon, Chad, Niger and Nigeria, the four countries that contain parts of Lake Chad. About 20% of the basin, lying in these countries, is termed the Conventional Basin. The Lake Chad Basin Commission manages the use of water and other natural resources in this area. Although the lake fluctuates considerably in size from one year to another, the general trend has been for water levels to decrease. There has been a proposal to supply water from the Congo Basin via a canal 2,400 kilometres (1,500 mi) long, but major political, technical, and economic challenges would have to be overcome to make this practical. People Humans have lived in the inner Chad Basin for at least eight thousand years, and were engaging in agriculture and livestock management around the lake by 1000 BC. Permanent villages were established to the south of the lake by 500 BC, at the start of the Iron Age. The Chad Basin contained important trade routes to the east and to the north across the Sahara. By the 5th century AD camels were being used for trans-Saharan trade via the Fezzan, or to the east via Darfur, where slaves and ivory were exchanged for salt, horses, glass beads and, later, firearms. After the Arabs conquered North Africa during the 7th and 8th centuries, the Chad Basin became increasingly linked to the Muslim countries. Trade and improved agricultural techniques enabled more sophisticated societies, resulting in the early kingdoms of the Kanem Empire, the Wadai Empire, and the Sultanate of Bagirmi. Kanem developed during the 8th century in the region to the north and east of Lake Chad. The Sayfuwa dynasty that ruled this kingdom had adopted Islam by the 12th century. The Kanem Empire went into decline, shrank, and during the 14th century was defeated by Bilala invaders from the Lake Fitri region. The Kanuri people, led by the Sayfuwa, migrated to the west and south of the lake, where they established the Bornu Empire. By the late 16th century the Bornu Empire had expanded and recaptured the parts of Kanem that had been conquered by the Bilala. Satellite states of Bornu included the Sultanate of Damagaram in the west and Baguirmi to the southeast of Lake Chad. 
The Tunjur people founded the Wadai Empire to the east of Bornu during the 16th century. During the 17th century, the Maba people revolted and established a Muslim dynasty. At first Wadai paid tribute to Bornu and Darfur, but by the 18th century Wadai was fully independent and had become an aggressor against its neighbors. To the west of Bornu, by the 15th century the Kingdom of Kano had become the most powerful of the Hausa Kingdoms, with an unstable truce with the Kingdom of Katsina to the north. Both of these states adopted Islam during the 15th and 16th centuries. Both were absorbed into the Sokoto Caliphate during the Fulani War of 1805, which threatened Bornu itself. During the Berlin Conference of 1884–85 Africa was divided between the European colonial powers, defining boundaries that remain largely intact in the present post-colonial states. On 5 August 1890 the British and French concluded an agreement to define the boundary between French West Africa and what would become Nigeria. A boundary was agreed along a line from Say on the river Niger to Barruwa on Lake Chad, but leaving the Sokoto Caliphate in the British sphere. Parfait-Louis Monteil was given charge of an expedition to discover where this line actually ran. On 9 April 1892 he reached Kukawa on the shore of the lake. During the next twenty years a large part of the Chad Basin was incorporated by treaty or by force into French West Africa. On 2 June 1909 the Wadai capital of Abéché was occupied by the French. The remainder of the basin was divided between the British in Nigeria, who captured Kano in 1903, and the Germans in Kamerun. The countries of the basin regained their independence between 1956 and 1962, retaining the colonial administrative boundaries. The area is badly affected by the Boko Haram insurgency, which began in 2009 and is centred on Borno State in northeastern Nigeria. As of 2011, more than 30 million people lived in the Chad Basin. The population is growing rapidly. Ethnic groups include the Kanuri, Maba, Buduma, Hausa, Kanembu, Kotoko, Baggara, Haddad, Kuri, Fulani and Manga. The largest cities are Kano and Maiduguri in Nigeria, Maroua in Cameroon, N'Djamena in Chad and Diffa in Niger. The main economic activities are farming, herding and fishing. At least 40% of the rural population of the basin is impoverished and experiences chronic food shortages. Crop production based on rain is possible only in the southern belt. Flood recession agriculture is practiced around Lake Chad and in the riverine wetlands. Nomadic herders migrate with their animals into the grasslands of the northern part of the basin for a few weeks during each brief rainy season, where they intensively graze the nutritious grasses. When the dry season starts they move back south, either to grazing lands around the lakes and floodplains, or to the savannas further south. During the period 2000–01, fisheries in the Lake Chad basin provided food and income to more than 10 million people, with a harvest of about 70,000 tons. Fisheries have been managed traditionally by a system in which each village has recognized rights over a defined part of the river, wetland or lake, and fishers from elsewhere must seek permission and pay a fee to use this area. Governments enforced rules and regulations only to a limited extent. Fishery management practices vary. 
For example, on the Katagum river in Jigawa State, Nigeria, a village will have a water management council that collects a portion of each fisherman's catch and redistributes it among the villagers, or sells it and uses the proceeds for communal projects. Local governments and traditional authorities are increasingly engaged in rent-seeking, collecting license fees with the help of the police or army.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Cognitive_psychology] | [TOKENS: 3895] |
Cognitive psychology is the scientific study of human mental processes such as attention, language use, memory, perception, problem solving, creativity, and reasoning. Cognitive psychology originated in the 1960s in a break from behaviorism, which had held from the 1920s to the 1950s that unobservable mental processes were outside the realm of empirical science. This break came as researchers in linguistics, cybernetics, and applied psychology used models of mental processing to explain human behavior. Work derived from cognitive psychology was integrated into other branches of psychology and various other modern disciplines like cognitive science, linguistics, and economics. History Philosophically, ruminations on the human mind and its processes have been around since the time of the ancient Greeks. In 387 BCE, Plato suggested that the brain was the seat of mental processes. In 1637, René Descartes posited that humans have innate ideas and promulgated mind-body dualism, which came to be known as substance dualism (essentially the idea that the mind and the body are two separate substances). From that time, major debates ensued through the 19th century about whether human thought is solely experiential (empiricism) or includes innate knowledge (nativism). Some of those involved in this debate included George Berkeley and John Locke on the side of empiricism, and Immanuel Kant on the side of nativism. With the philosophical debate continuing, the mid- to late 19th century was a critical time in the development of psychology as a scientific discipline. Two discoveries that later played substantial roles in cognitive psychology were Paul Broca's discovery of the area of the brain largely responsible for language production and Carl Wernicke's discovery of an area thought to be mostly responsible for comprehension of language. Both areas were subsequently named after their discoverers, and disruptions of an individual's language production or comprehension due to trauma or malformation in these areas have commonly come to be known as Broca's aphasia and Wernicke's aphasia. From the 1920s to the 1950s, the main approach to psychology was behaviorism. Initially, its adherents viewed mental events such as thoughts, ideas, attention, and consciousness as unobservable, hence outside the realm of a science of psychology. A pioneer of cognitive psychology, whose work predated much of the behaviorist literature, was Carl Jung, who introduced the hypothesis of cognitive functions in his 1921 book Psychological Types. Another pioneer of cognitive psychology, who worked outside the boundaries (both intellectual and geographical) of behaviorism, was Jean Piaget. From 1926 to the 1950s and into the 1980s, he studied the thoughts, language, and intelligence of children and adults. In the mid-20th century, four main influences arose that inspired and shaped cognitive psychology as a formal school of thought. Ulric Neisser put the term "cognitive psychology" into common use through his 1967 book Cognitive Psychology. Neisser's definition of "cognition" illustrates the then-progressive concept of cognitive processes: The term "cognition" refers to all processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is concerned with these processes even when they operate in the absence of relevant stimulation, as in images and hallucinations. ... 
Given such a sweeping definition, it is apparent that cognition is involved in everything a human being might possibly do; that every psychological phenomenon is a cognitive phenomenon. But although cognitive psychology is concerned with all human activity rather than some fraction of it, the concern is from a particular point of view. Other viewpoints are equally legitimate and necessary. Dynamic psychology, which begins with motives rather than with sensory input, is a case in point. Instead of asking how a man's actions and experiences result from what he saw, remembered, or believed, the dynamic psychologist asks how they follow from the subject's goals, needs, or instincts. Cognitive processes The main focus of cognitive psychologists is the mental processes that affect behavior. Those processes include, but are not limited to, the three stages of memory: sensory memory storage, short-term memory storage, and long-term memory storage. The psychological definition of attention is "a state of focused awareness on a subset of the available perceptual information". A key function of attention is to identify irrelevant data and filter it out, enabling significant data to be distributed to the other mental processes. For example, the human brain may simultaneously receive auditory, visual, olfactory, taste, and tactile information. The brain is able to consciously handle only a small subset of this information, and this is accomplished through the attentional processes. Attention can be divided into two major attentional systems: exogenous control and endogenous control. Exogenous control works in a bottom-up manner and is responsible for the orienting reflex and pop-out effects. Endogenous control works top-down and is the more deliberate attentional system, responsible for divided attention and conscious processing. One major focal point relating to attention within the field of cognitive psychology is the concept of divided attention. A number of early studies dealt with the ability of a person wearing headphones to discern meaningful conversation when presented with different messages in each ear; this is known as the dichotic listening task. Key findings involved an increased understanding of the mind's ability to focus on one message while still remaining somewhat aware of information taken in by the ear not being consciously attended to. For example, participants (wearing earphones) may be told that they will be hearing separate messages in each ear and that they are expected to attend only to information related to basketball. When the experiment starts, the message about basketball will be presented to the left ear and non-relevant information will be presented to the right ear. At some point the message related to basketball will switch to the right ear and the non-relevant information to the left ear. When this happens, the listener is usually able to repeat the entire message at the end, having attended to the left or right ear only when it was appropriate. The ability to attend to one conversation among many is known as the cocktail party effect. Other major findings are that participants cannot comprehend both passages when shadowing one of them, that they cannot report the content of the unattended message, and that they can shadow a message better if the pitches in the two ears differ. However, while deep processing of the unattended message does not occur, early sensory processing does. 
Subjects did notice if the pitch of the unattended message changed or if it ceased altogether, and some even oriented to the unattended message if their name was mentioned. The two main types of memory are short-term memory and long-term memory; however, short-term memory is now better understood as working memory. Cognitive psychologists often study memory in terms of working memory. Though working memory is often thought of as just short-term memory, it is more clearly defined as the ability to process and maintain temporary information in a wide range of everyday activities in the face of distraction. The famous memory capacity of seven plus or minus two items reflects a combination of memories held in working memory and long-term memory. One of the classic experiments is by Ebbinghaus, who found the serial position effect, in which information from the beginning and end of a list of random words is better recalled than information in the middle. This primacy and recency effect varies in intensity based on list length. Its typical U-shaped curve can be disrupted by an attention-grabbing word; this is known as the von Restorff effect. Many models of working memory have been proposed. One of the most highly regarded is the Baddeley and Hitch model of working memory. It takes into account both visual and auditory stimuli, long-term memory to use as a reference, and a central processor to combine and understand it all. A large part of memory is forgetting, and there is a large debate among psychologists between decay theory and interference theory. Modern conceptions of memory are usually about long-term memory, which they break down into three main sub-classes: procedural, semantic, and episodic memory. These three classes are somewhat hierarchical in nature, in terms of the level of conscious thought related to their use. Perception involves both the physical senses (sight, smell, hearing, taste, touch, and proprioception) and the cognitive processes involved in interpreting those senses. Essentially, it is how people come to understand the world around them through the interpretation of stimuli. Early psychologists like Edward B. Titchener began to work with perception in their structuralist approach to psychology. Structuralism dealt heavily with trying to reduce human thought (or "consciousness", as Titchener would have called it) into its most basic elements by gaining an understanding of how an individual perceives particular stimuli. Current perspectives on perception within cognitive psychology tend to focus on the particular ways in which the human mind interprets stimuli from the senses and how these interpretations affect behavior. An example of the way in which modern psychologists approach the study of perception is the research being done at the Center for the Ecological Study of Perception and Action (CESPA) at the University of Connecticut. One study at CESPA concerns ways in which individuals perceive their physical environment and how that influences their navigation through that environment. Psychologists have had an interest in the cognitive processes involved with language since the 1870s, when Carl Wernicke proposed a model for the mental processing of language. Current work on language within the field of cognitive psychology varies widely. Cognitive psychologists may study language acquisition, individual components of language formation (like phonemes), how language use is involved in mood, or numerous other related areas. 
Significant work has focused on understanding the timing of language acquisition and how it can be used to determine whether a child has, or is at risk of developing, a learning disability. A study from 2012 showed that, while this can be an effective strategy, it is important that those making evaluations include all relevant information in their assessments. Factors such as individual variability, socioeconomic status, and short-term and long-term memory capacity, among others, must be included in order to make valid assessments. Metacognition, in a broad sense, is the thoughts that a person has about their own thoughts; more specifically, it includes a person's knowledge of and control over their own cognitive processes. Much of the current study of metacognition within the field of cognitive psychology deals with its application in education. Being able to increase a student's metacognitive abilities has been shown to have a significant impact on their learning and study habits. One key aspect of this concept is the improvement of students' ability to set goals and self-regulate effectively to meet those goals. As a part of this process, it is also important to ensure that students are realistically evaluating their personal degree of knowledge and setting realistic goals (another metacognitive task). A number of common everyday phenomena are also related to metacognition. Modern perspectives Modern perspectives on cognitive psychology generally address cognition in terms of dual process theory, expounded upon by Daniel Kahneman in 2011. Kahneman further differentiated the two styles of processing, calling them intuition and reasoning. Intuition (or System 1), similar to associative reasoning, was determined to be fast and automatic, usually with strong emotional bonds included in the reasoning process. Kahneman said that this kind of reasoning was based on formed habits and very difficult to change or manipulate. Reasoning (or System 2) was slower and much more volatile, being subject to conscious judgments and attitudes. Applications Following the cognitive revolution, and as a result of many of the principal discoveries to come out of the field of cognitive psychology, the discipline of cognitive behavior therapy (CBT) evolved. Aaron T. Beck is generally regarded as the father of cognitive therapy, a particular type of CBT treatment. His work in the areas of recognition and treatment of depression has gained worldwide recognition. In his 1987 book Cognitive Therapy of Depression, Beck puts forth three salient points with regard to his reasoning for the treatment of depression by means of therapy, or therapy and antidepressants, versus a pharmacological-only approach: 1. Despite the prevalent use of antidepressants, the fact remains that not all patients respond to them. Beck cites (in 1987) that only 60 to 65% of patients respond to antidepressants, and recent meta-analyses (a statistical breakdown of multiple studies) show very similar numbers. 2. Many of those who do respond to antidepressants end up not taking their medications, for various reasons; they may develop side-effects or have some form of personal objection to taking the drugs. 3. Beck posits that the use of psychotropic drugs may lead to an eventual breakdown in the individual's coping mechanisms. His theory is that the person essentially becomes reliant on the medication as a means of improving mood and fails to practice the coping techniques typically practiced by healthy individuals to alleviate the effects of depressive symptoms. 
By failing to do so, once the patient is weaned off the antidepressants, they are often unable to cope with normal levels of depressed mood and feel driven to reinstate use of the antidepressants. Many facets of modern social psychology have roots in research done within the field of cognitive psychology. Social cognition is a specific sub-set of social psychology that concentrates on processes that have been of particular focus within cognitive psychology, specifically applied to human interactions. Gordon B. Moskowitz defines social cognition as "... the study of the mental processes involved in perceiving, attending to, remembering, thinking about, and making sense of the people in our social world". The development of multiple social information processing (SIP) models has been influential in studies involving aggressive and anti-social behavior. Kenneth Dodge's SIP model is one of the most, if not the most, empirically supported models relating to aggression. In his research, Dodge posits that children who possess a greater ability to process social information more often display higher levels of socially acceptable behavior, and that the type of social interactions children have affects their relationships. His model asserts that there are five steps that an individual proceeds through when evaluating interactions with other individuals and that how the person interprets cues is key to their reactionary process. Many of the prominent names in the field of developmental psychology base their understanding of development on cognitive models. One of the major paradigms of developmental psychology, the Theory of Mind (ToM), deals specifically with the ability of an individual to effectively understand and attribute cognition to those around them. This concept typically becomes fully apparent in children between the ages of 4 and 6. Essentially, before the child develops ToM, they are unable to understand that those around them can have different thoughts, ideas, or feelings than themselves. The development of ToM is a matter of metacognition, or thinking about one's thoughts. The child must be able to recognize that they have their own thoughts and, in turn, that others possess thoughts of their own. One of the foremost minds with regard to developmental psychology, Jean Piaget, focused much of his attention on cognitive development from birth through adulthood. Though there have been considerable challenges to parts of his stages of cognitive development, they remain a staple in the realm of education. Piaget's concepts and ideas predated the cognitive revolution but inspired a wealth of research in the field of cognitive psychology, and many of his principles have been blended with modern theory to synthesize the predominant views of today. Modern theories of education have applied many concepts that are focal points of cognitive psychology. Cognitive therapeutic approaches have received considerable attention in the treatment of personality disorders in recent years. The approach focuses on the formation of what are believed to be faulty schemata, centered on judgmental biases and general cognitive errors. Relationship to cognitive science Cognitive psychology is considered a core aspect of cognitive science, the interdisciplinary study of mind and mental function, including how such functions are implemented in brains and machines.
Cognitive science, as a unitary field, integrates knowledge, theory and methodology from psychology, neuroscience, linguistics, philosophy, artificial intelligence, and anthropology. It has been argued that cognitive science has been largely subsumed by cognitive psychology, with some scholars even using the terms interchangeably (see LeMoult & Gotlib for an example). This largely results from early difficulties integrating the different fields of cognitive science (e.g. psychology and artificial intelligence), with the resulting divergence of terminology, methodology and theoretical approach over time rendering efforts to unify the disciplines challenging. Criticisms Some observers have suggested that as cognitive psychology became a movement during the 1970s, the intricacies of the phenomena and processes it examined meant it also began to lose cohesion as a field of study. In Psychology: Pythagoras to Present, for example, John Malone writes: "Examinations of late twentieth-century textbooks dealing with "cognitive psychology", "human cognition", "cognitive science" and the like quickly reveal that there are many, many varieties of cognitive psychology and very little agreement about exactly what may be its domain." This fragmentation produced competing models that questioned information-processing approaches to cognitive functioning, such as those arising in decision making and the behavioral sciences. Recently, cognitive psychology has been criticised for being overly focused on the internal mind, allowing little room for influences external to it. 4E cognition is one such new approach that aims to show that cognition is embodied, embedded, extended and enacted. Controversies In the early years of cognitive psychology, behaviorist critics held that the empiricism it pursued was incompatible with the concept of internal mental states. However, cognitive neuroscience continues to gather evidence of direct correlations between physiological brain activity and mental states, endorsing the basis for cognitive psychology. There is, however, disagreement between neuropsychologists and cognitive psychologists. Cognitive psychology has produced models of cognition which are not supported by modern brain science. It is often the case that the advocates of different cognitive models form a dialectic relationship with one another, thus affecting empirical research, with researchers siding with their favorite theory. For example, advocates of mental model theory have attempted to find evidence that deductive reasoning is based on image thinking, while the advocates of mental logic theory have tried to prove that it is based on verbal thinking, leading to a disorderly picture of the findings from brain imaging and brain lesion studies. When theoretical claims are put aside, the evidence shows that interaction depends on the type of task tested, whether of visuospatial or linguistic orientation; but that there is also an aspect of reasoning which is not covered by either theory. Similarly, neurolinguistics has found that it is easier to make sense of brain imaging studies when the theories are left aside. In the field of language cognition research, generative grammar has taken the position that language resides within its own private cognitive module, while 'Cognitive Linguistics' goes to the opposite extreme by claiming that language is not an independent function, but operates on general cognitive capacities such as visual processing and motor skills.
Consensus in neuropsychology, however, takes the middle position that, while language is a specialized function, it overlaps or interacts with visual processing. Nonetheless, much of the research in language cognition continues to be divided along the lines of generative grammar and Cognitive Linguistics; and this, again, affects adjacent research fields, including language development and language acquisition. Major research areas Categorization Knowledge representation Language Memory Perception Thinking Influential cognitive psychologists See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-111] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues forming at shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700.
"When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock."
Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race said "$299" and left to a round of applause from the audience. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market across Sony showrooms during 1999–2000, selling 100 units. Sony finally launched the console (the PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be launched officially because the trademark had been registered by another company; the officially distributed Sega Saturn initially took over the market, but as the Sega console withdrew, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation's user base grew to 300,000 by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's geometric button symbols replaced letters, rendered as "Live in Your World. Play in Ours." and "U R NOT E" (with a red "E", read as "you are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal enlarged, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical over Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclub owners such as Ministry of Sound and festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, encouraged by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering roughly 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusual for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can also generate a total of 4,000 sprites and 180,000 polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service, and came with the documentation and software necessary to program PlayStation games and applications using C compilers.
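To make the throughput figures quoted above more concrete, the short C program below (C being the language used for PlayStation and Net Yaroze development, as noted above) divides the per-second polygon rates into rough per-frame budgets. The 60 Hz (NTSC) and 50 Hz (PAL) display rates are assumptions, as is reading the 180,000-per-second figure as the textured-polygon rate; this is an illustrative sketch, not an official specification.

    #include <stdio.h>

    int main(void) {
        /* Polygon rates quoted in the hardware description above. Treating the
           lower figure as the textured rate is an assumption for illustration. */
        const double flat_shaded_per_sec = 360000.0;
        const double textured_per_sec    = 180000.0;
        /* Assumed display rates: NTSC (60 Hz) and PAL (50 Hz). */
        const double frame_rates[2] = { 60.0, 50.0 };

        for (int i = 0; i < 2; i++) {
            double fps = frame_rates[i];
            printf("%.0f Hz: about %.0f flat-shaded or %.0f textured polygons per frame\n",
                   fps, flat_shaded_per_sec / fps, textured_per_sec / fps);
        }
        return 0;
    }

At 60 Hz this works out to roughly 6,000 flat-shaded or 3,000 textured polygons per frame, which helps explain why contemporary developers had to budget scene geometry carefully.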
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo Pack". It also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square (△, ○, ✕, □). Rather than labelling its buttons with the letters or numbers traditionally used, the PlayStation controller established a visual trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used for instances when simple digital movements are necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggle analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that was not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could have left games vulnerable to piracy, due to the growing popularity of CD-R discs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberate irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts); a conceptual sketch of this boot-time check appears below. This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, the disc drive could not detect the wobble frequency (and duplicated discs therefore omitted it), since the laser pick-up system of any optical disc drive would interpret this wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000-series models, could experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface that dissipates heat efficiently in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws in a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations used a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled will become so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
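Returning to the disc-authentication scheme described above: in outline, it amounts to a boot-time gate in which the drive tries to decode the deliberate pregap wobble and the console refuses to start the disc if that data is missing or carries the wrong regional code. The C sketch below is purely conceptual and is not the actual BIOS logic; the helper function, its behaviour, and the "SCEA" region string are hypothetical stand-ins used only to illustrate the idea.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Conceptual illustration only -- not real PlayStation BIOS code.
       read_pregap_wobble() stands in for the drive decoding the deliberate
       wobble in the disc pregap; a burned copy yields no such data, because
       an ordinary burner treats the wobble as surface oscillation and
       compensates it away rather than reproducing it. */
    static bool read_pregap_wobble(char *out, size_t len, bool genuine_disc) {
        if (!genuine_disc)
            return false;               /* copy: no decodable wobble data */
        strncpy(out, "SCEA", len);      /* hypothetical regional code */
        return true;
    }

    /* Boot gate: require wobble data and a matching regional code. */
    static bool disc_may_boot(const char *console_region, bool genuine_disc) {
        char code[8] = {0};
        if (!read_pregap_wobble(code, sizeof code - 1, genuine_disc))
            return false;
        return strncmp(code, console_region, 4) == 0;
    }

    int main(void) {
        printf("genuine disc, matching region: %d\n", disc_may_boot("SCEA", true));
        printf("burned copy:                   %d\n", disc_may_boot("SCEA", false));
        return 0;
    }

The point of the sketch is the asymmetry it encodes: a conventional drive can read every byte of a genuine disc's content, yet it cannot reproduce the out-of-band wobble signal, so a bit-perfect copy of the data alone is not enough to pass the check.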
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, this being the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel, rivalling that of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with the video game division contributing 23% of the company's profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal towards older audiences to be a crucial factor in propelling the video game industry, as well as its assistance in transitioning game industry to use the CD-ROM format. Keith Stuart from The Guardian likewise named it as the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, attempting to reverse engineer PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to bring a fully rooted version with multilayer routing as well as documentation and design files in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the proprietary cartridge-relying Nintendo 64,[d] which the industry had expected to use CDs like PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games about 40% lower cost to the user compared to ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that they could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavorably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. See also Notes References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Israelite_highland_settlement] | [TOKENS: 389] |
Contents Israelite highland settlement In the early Iron Age, Canaan was characterized by a significant increase in a sedentary Israelite population in Samaria. Archaeology Archaeological field surveys conducted since the 1970s found a large increase in the settled population dating to the 12th-century BC Late Bronze Age collapse. It is not known whether the Israelites arrived in the wake of conquests or whether the new villages were established by former nomads or displaced persons. A similar increase was not found in the surrounding lowland areas, which, according to archaeological evidence, may have been inhabited by Canaanites or Sea Peoples. A 2005 book by Robert D. Miller applies statistical modeling to the sizes and locations of the villages, grouping them by economic and political features. He found highland groupings centered on Dothan, Tirzah, Shechem, and Shiloh. The tribal territory of Benjamin was not organized around any main town. Biblical narrative The Book of Joshua describes the conquest of Canaan, including the Fall of Jericho and the Battle of the Waters of Merom. This evidence does not prove there was a conquest, but if the biblical reference to "daughter villages" means all villages closest to a specific town, then the list of Canaanite towns not taken in the Book of Judges (Judges 1:27–35), which begins "Nor did Manasseh drive out Bet Shean and her daughter-villages ...", corresponds remarkably accurately to the survey results. Towns not captured in the central zone were Taanach, Ibleam, Megiddo, Dor, Gezer, Aijalon, Shaalbim, and Jerusalem. See also References Bibliography |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_ref-WP-20201231_1-0] | [TOKENS: 11349] |
Contents Extraterrestrial life Extraterrestrial life, or alien life (colloquially aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been scientifically or conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants.: 26 In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but their existence was a matter of speculation.: 67 In comparison to the life-abundant Earth, the vast majority of intrasolar and extrasolar planets and moons have harsh surface conditions and disparate atmospheric chemistry, or lack an atmosphere altogether. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere. Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar System studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from the analysis of telescope and specimen data to radios used to detect and transmit interstellar communication. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium. The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given the human history of exploiting other societies. Context Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion kelvin at the one-second mark. Roughly 15 million years later, it had cooled to temperate levels, though the elements of organic life did not yet exist. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until about 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth. 
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread—by meteoroids, for example—between habitable planets in a process called panspermia. During most of their evolution, stars fuse hydrogen nuclei into helium nuclei; the helium produced is slightly lighter than the hydrogen consumed, and the mass difference is released as energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During their last stages, stars start fusing helium nuclei to form carbon nuclei. Larger stars can fuse further, producing successively heavier elements such as oxygen, neon, silicon, and sulfur, until iron is reached. Ultimately, the star blows much of its content back into the interstellar medium, where it joins clouds that will eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, these materials are ubiquitous in the cosmos and not a rarity of the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that would be lethal to humans, the distances cause long travel times: the New Horizons probe took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 left the Solar System at a speed of 50,000 kilometers per hour; if it were headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light-years, it would arrive in roughly 100,000 years. Under current technology, such systems can only be studied by telescopes, which have limitations. It is estimated that dark matter comprises more combined mass than stars and gas clouds, but as it plays no role in the evolution of stars and planets, it is usually not taken into account by astrobiology. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, or even to actually have liquid water. Venus is located in the Solar System's habitable zone, but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures. 
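The dependence of the habitable zone on the star, discussed next, follows from inverse-square scaling: the equilibrium temperature at distance d from a star of luminosity L scales as (L/d²)^(1/4), so the zone's boundaries scale roughly as the square root of the luminosity. A minimal Python sketch follows, using solar-referenced boundaries of 0.95 and 1.37 AU, which are rough approximations assumed here for illustration rather than values taken from this article.

    import math

    # First-order habitable-zone estimate: boundaries scale as sqrt(L / L_sun).
    # The 0.95 and 1.37 AU reference boundaries for the Sun are rough assumptions.
    INNER_AU, OUTER_AU = 0.95, 1.37

    def habitable_zone(luminosity_solar: float) -> tuple[float, float]:
        """Return the (inner, outer) zone boundaries in AU for a star of the
        given luminosity, expressed in solar units."""
        scale = math.sqrt(luminosity_solar)
        return INNER_AU * scale, OUTER_AU * scale

    for name, lum in [("red dwarf", 0.05), ("Sun", 1.0), ("F-type star", 2.5)]:
        inner, outer = habitable_zone(lum)
        print(f"{name:12s} L = {lum:4.2f} L_sun -> {inner:.2f} to {outer:.2f} AU")

This first-order picture deliberately ignores the complications the text goes on to name, such as atmospheric effects (as on Venus) and stellar activity.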
The actual distances for the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits will change along with the star's stellar evolution. The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. When considered from a cosmic perspective, the brief existence of Earth's species suggests that extraterrestrial life may be equally fleeting on such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe". Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all available environments; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements. A celestial body may not have any life on it, even if it were habitable. Likelihood of existence Life in the cosmos beyond Earth has never been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets with a habitability similar to Earth's, and that the age of the universe gives enough time for a long process analogous to the history of Earth to take place there. The second is that the substances that make life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors, ranging from its location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet simultaneously meets all such requirements. Its proponents consider that very little evidence suggests the existence of extraterrestrial life and that, at this point, it is just a desired result and not a reasonable scientific explanation for any gathered data. 
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The Drake equation is N = R* · fp · ne · fl · fi · fc · L,: xix where N is the number of civilizations in the Milky Way whose electromagnetic emissions are detectable, R* is the average rate of star formation, fp is the fraction of stars with planets, ne is the average number of planets per planet-bearing star that could support life, fl is the fraction of those planets on which life actually develops, fi is the fraction of life-bearing planets that develop intelligent life, fc is the fraction of civilizations that release detectable signs of their existence into space, and L is the length of time over which such civilizations release detectable signals. Drake's proposed estimates are as follows, though the numbers on the right side of the equation are agreed to be speculative and open to substitution: 10,000 = 5 · 0.5 · 2 · 1 · 0.2 · 1 · 10,000. The Drake equation has proved controversial since, although it is written as a mathematical equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw firm conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets. In other words, there are 6.25×10^18 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis, which explains the formation of the Solar System and other planetary systems, suggests that planetary systems can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, giving a potential explanation for the Fermi paradox. Biochemical basis If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, as for life on Earth, which depends on the energy of the Sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight, and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones. 
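The Drake-equation arithmetic reconstructed above can be checked directly. A minimal Python sketch follows, using Drake's illustrative values and assuming the factors are listed in the equation's order; these are the speculative placeholders quoted in the text, not measurements.

    # Drake equation: N = R* * fp * ne * fl * fi * fc * L
    def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
        """Estimated number of detectable, communicating civilizations
        in the Milky Way."""
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

    # Drake's illustrative values as quoted in the text above.
    n = drake(r_star=5, f_p=0.5, n_e=2, f_l=1, f_i=0.2, f_c=1, lifetime=10_000)
    print(n)  # 10000.0, matching the product quoted above

As the text notes, the output is only as meaningful as its inputs, several of which are not measurable even in principle.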
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: atoms move too quickly in a gas and too slowly in a solid for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life is the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, and antimony (three bonds), and carbon, silicon, germanium, and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more so than the others. In Earth's crust the most abundant of those elements is silicon; in the hydrosphere it is carbon; and in the atmosphere, carbon and nitrogen. Silicon, however, has disadvantages compared to carbon. The molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kick-starting a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins. 
Extraterrestrial life may still be stuck using RNA, or may have evolved into other configurations. It is unclear whether our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition from those on Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, even as a hypothesis. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place billions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research assessing the capacity of life to develop intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. The conditions on other planets in the Solar System, and in the many galaxies beyond the Milky Way, are very harsh and seem too extreme to harbor any life: intense UV radiation, extreme temperatures, and a lack of water, among other factors, do not appear to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that would seem unlikely to harbor life. Fossil evidence, together with theories backed by years of research and study, has marked environments like hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth. 
These environments are extreme compared with the typical ecosystems that most life on Earth now inhabits; hydrothermal vents, for instance, are scorching hot where magma escaping from the Earth's mantle meets the much colder ocean water. Even today, diverse populations of bacteria inhabit the areas surrounding these hydrothermal vents, which suggests that some form of life could be supported even in environments as harsh as those on other planets in the Solar System. The aspect of these harsh environments that makes them plausible sites for the origin of life on Earth, and possibly for the origin of life on other planets, is that chemical reactions form spontaneously there. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy through reduced chemical compounds that fix carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was reducing, and these carbon-fixing compounds were therefore necessary for the survival and possible origin of life on Earth. From the limited information scientists have about the atmospheres of other planets in the Milky Way galaxy and beyond, those atmospheres are most likely reducing, or at least very low in oxygen, especially compared with Earth's atmosphere. If the necessary elements and ions were present on these planets, the same carbon-fixing, reduced chemical compounds occurring around hydrothermal vents could also occur on these planets' surfaces and possibly result in the origin of extraterrestrial life. Planetary habitability in the Solar System The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No extraterrestrial intelligence other than humans exists or has ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. Venus has an extreme greenhouse effect, the hottest surface in the Solar System, sulfuric acid clouds, and a thick carbon-dioxide atmosphere with huge pressure, and all its surface liquid water has been lost. Comparing the two planets helps to understand the precise differences that lead to beneficial or harmful conditions for life. And despite the conditions working against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, solar winds removed the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground. 
As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and beyond, are locked in a permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope of finding it on the moons orbiting these planets. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because the water is sandwiched between layers of solid ice. Europa's ocean would be in contact with the rocky interior, which helps the chemical reactions. It may be difficult to dig deep enough to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be drilled into, as it releases water into space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA had not expected this phenomenon and had not equipped the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons, methane, and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it lies at such a great depth that it would be very difficult to access for study. Scientific search The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. Through the study of Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and the requirements for its continuous existence. This helps to determine what to look for when searching for life on other celestial bodies. This is a complex area of study, and it uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems had been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria had been discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology. 
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is the more likely hypothesis. In February 2005, NASA scientists reported that they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced the agency from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory, which landed the Curiosity rover on Mars. It is designed to assess past and present habitability on Mars using a variety of scientific instruments. The rover landed at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms, recording the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants with photosynthesis. In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out contamination of the meteorites on Earth, as those components would not be freely available in the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light-years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first detection, in the plumes of Enceladus, a moon of Saturn, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. 
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well. Technology may generate technosignatures: effects on the native planet that cannot readily be explained by natural causes. Three main types of technosignatures are considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves, and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals too, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which could be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels would likely be generated and used on such worlds as well. The abundance of chlorofluorocarbons in the atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not powerful enough to study exoplanets at the level of detail required to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to the star, called Dyson spheres. Those speculative structures would emit excess infrared radiation, which telescopes could notice. Infrared radiation is typical of young stars, surrounded by the dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to show excess infrared radiation. The presence of heavy elements in a star's light-spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator or repository for nuclear waste products. Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, over four thousand exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets so far discovered range in size from that of terrestrial planets similar to Earth to that of gas giants larger than Jupiter. 
The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars[a] have an "Earth-sized"[b] planet in the habitable zone,[c] with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way,[d] that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed in the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter; according to most definitions of a planet, however, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. On Earth, this replenishment is provided by photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy as it transits its star, though this might only be feasible with dim stars like white dwarfs. History and cultural impact The modern concept of extraterrestrial life rests on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars in Ancient Greece were the first to consider the universe as inherently understandable, and rejected explanations based on supernatural, incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as the principle that explanations had to be discarded if they contradicted observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as the understanding that Earth is round and not flat. The cosmos was first structured in a geocentric model, which held that the Sun and all other celestial bodies revolve around Earth. However, the Greeks did not consider those bodies to be worlds. In the Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos. 
Eventually two groups emerged: the atomists, who thought that matter both on Earth and in the cosmos was equally made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all the earth element naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center, it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds were different from the current knowledge about the structure of the universe, and did not postulate the existence of planetary systems other than the Solar System. When those authors talk about other worlds, they mean places located at the center of their own systems, with their own stellar vaults and cosmos surrounding them. The Greek ideas and the disputes between atomists and Aristotelians outlived the fall of the Greek empire. The Great Library of Alexandria compiled information about them, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and the knowledge spread through the Byzantine Empire. From there it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the church itself. The first known mention of the term "panspermia" was in the writings of the 5th-century BC Greek philosopher Anaxagoras, who proposed the idea that life exists everywhere. By the time of the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants. 
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, dispelled the final doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is but one planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making the planet truly special. The new ideas were met with resistance from the Catholic Church. Galileo was tried for advocating the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him. The heliocentric model was further strengthened by Isaac Newton's theory of gravity, which provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model was definitively discarded. By this time, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons for its working that way. There was very little actual discussion of extraterrestrial life before this point, as Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it not only meant that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights, but physical objects. The notion that life may exist on them as well soon became an ongoing topic of discussion, although one with no practical means of investigation. The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. 
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, there was little thought about the conditions on each celestial body: it was simply assumed that life would thrive anywhere. Spontaneous generation was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System still remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which debunked forever the idea of the existence of Martians and lowered the previous expectations of finding alien life in general. The end of the spontaneous-generation belief forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Among them were Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903). The science fiction genre, although not so named at the time, developed during the late 19th century. The expansion of the genre of extraterrestrials in fiction influenced the popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, while later, more powerful instruments revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed that there was nothing special about the site. The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is studied by NASA, the ESA, the INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not because of the origin of life on Earth, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or only native to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study: all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse. The 20th century came with great technological advances, speculations about future hypothetical technologies, and an increased basic knowledge of science among the general population thanks to science popularization through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens. 
Ufology claims that many unidentified flying objects (UFOs) are spaceships from alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times, but that people at the time failed to understand it. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects, or weather phenomena, or as hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas which are largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves to be unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century, it was accepted that multicellular life in the Solar System can only exist on Earth, but interest in extraterrestrial life increased regardless. This is a result of advances in several sciences. The knowledge of planetary habitability allows the likelihood of finding life on each specific celestial body to be considered in scientific terms, as it is known which features are beneficial and which are harmful for life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft allow robots to be sent to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may still be a rarity unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes and giant strides in detection methods have allowed science to begin delineating habitability criteria on other worlds, and to confirm that at least other planets are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance". 
In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms, as aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". Government responses The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence, but the discussion ended without any conclusions. As of 2010, the UN lacks response mechanisms in the event of extraterrestrial contact. One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. Part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research, and acknowledged the possibility of the existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of "non-identified aerospatial phenomena". The agency maintains a publicly accessible database of such phenomena, with over 1,600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of entries, an extraterrestrial origin can neither be confirmed nor denied. 
In 2020, chairman of the Israel Space Agency Isaac Ben-Israel stated that the probability of detecting life in outer space is "quite large". However, he disagrees with his former colleague Haim Eshed, who stated that there are contacts between an advanced alien civilisation and some of Earth's governments. In fiction Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, they were not initially thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This changed with the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A usual way to do that was to add body features from other animals, such as insects or octopuses. Costuming and special-effects feasibility, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and later as CGI became more effective and less expensive. Real-life events sometimes captivate people's imagination and influence works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype widely used in works of fiction. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Developmental_psychology] | [TOKENS: 13979] |
Contents Developmental psychology Developmental psychology is the scientific study of how and why humans grow, change, and adapt across the course of their lives. Originally concerned with infants and children, the field has expanded to include adolescence, adult development, aging, and the entire lifespan. Developmental psychologists aim to explain how thinking, feeling, and behavior change throughout life. This field examines change across three major dimensions: physical development, cognitive development, and social-emotional development. Within these three dimensions are a broad range of topics including motor skills, executive functions, moral understanding, language acquisition, social change, personality, emotional development, self-concept, and identity formation. Developmental psychology explores the influence of both nature and nurture on human development, as well as the processes of change that occur across different contexts over time. Many researchers are interested in the interactions among personal characteristics, the individual's behavior, and environmental factors, including the social context and the built environment. Ongoing debates in developmental psychology include biological essentialism vs. neuroplasticity, and stages of development vs. dynamic systems of development. While research in developmental psychology has certain limitations, ongoing studies aim to understand how life stage transitions and biological factors influence human behavior and development. Developmental psychology involves a range of fields, such as educational psychology, child psychopathology, forensic developmental psychology, child development, cognitive psychology, ecological psychology, and cultural psychology. Influential developmental psychologists from the 20th century include Urie Bronfenbrenner, Erik Erikson, Sigmund Freud, Anna Freud, Jean Piaget, Barbara Rogoff, Esther Thelen, and Lev Vygotsky. Historical antecedents Jean-Jacques Rousseau and John B. Watson are typically cited as providing the foundation for modern developmental psychology. In the mid-18th century, Jean-Jacques Rousseau described three stages of development: infans (infancy), puer (childhood) and adolescence in Emile: Or, On Education. Rousseau's ideas were adopted and supported by educators at the time. Developmental psychology generally focuses on how and why certain changes (cognitive, social, intellectual, personality) occur over time in the course of a human life. Many theorists have made a profound contribution to this area of psychology. One of them is the psychologist Erik Erikson, who created a model of eight phases of psychosocial development. According to his theory, people go through different phases in their lives, each of which has its own developmental crisis that shapes a person's personality and behavior. In the late 19th century, psychologists familiar with the evolutionary theory of Darwin began seeking an evolutionary description of psychological development; prominent here was the pioneering psychologist G. Stanley Hall, who attempted to correlate ages of childhood with previous ages of humanity. James Mark Baldwin, who wrote essays on topics that included Imitation: A Chapter in the Natural History of Consciousness and Mental Development in the Child and the Race: Methods and Processes, was significantly involved in the theory of developmental psychology. Sigmund Freud, whose concepts were developmental, significantly affected public perceptions.
Theories Sigmund Freud developed a theory that suggested that humans behave as they do because they are constantly seeking pleasure. The way this pleasure is sought changes through stages as the person develops. Each period of seeking pleasure that a person experiences is represented by a stage of psychosexual development. These stages represent the process of maturing into adulthood. The first is the oral stage, which begins at birth and ends around a year and a half of age. During the oral stage, the child finds pleasure in behaviors like sucking or other behaviors with the mouth. The second is the anal stage, from about a year or a year and a half to three years of age. During the anal stage, the child defecates from the anus and is often fascinated with its defecation. This period of development often occurs during the time when the child is being toilet-trained. The child becomes interested in feces and urine. Children begin to see themselves as independent from their parents. They begin to desire assertiveness and autonomy. The third is the phallic stage, which occurs from three to five years of age (most of a person's personality forms by this age). During the phallic stage, the child becomes aware of its sexual organs. Pleasure comes from finding acceptance and love from the opposite sex. The fourth is the latency stage, which occurs from age five until puberty. During the latency stage, the child's sexual interests are repressed. Stage five is the genital stage, which takes place from puberty until adulthood. During the genital stage, puberty begins to occur. Children have now matured, and begin to think about other people instead of just themselves. Pleasure comes from feelings of affection from other people. Freud believed there is tension between the conscious and unconscious because the conscious tries to hold back what the unconscious tries to express. To explain this, he developed three personality structures: id, ego, and superego. The id, the most primitive of the three, functions according to the pleasure principle: seek pleasure and avoid pain. The superego plays the critical and moralizing role, while the ego is the organized, realistic part that mediates between the desires of the id and the superego. Jean Piaget, a Swiss theorist, posited that children learn by actively constructing knowledge through their interactions with their physical and social environments. He suggested that the adult's role in helping the child learn was to provide appropriate materials. In his interview techniques with children that formed an empirical basis for his theories, he used something similar to Socratic questioning to get children to reveal their thinking. He argued that a principal source of development was the child's inevitable generation of contradictions through their interactions with their physical and social worlds. The child's resolution of these contradictions led to more integrated and advanced forms of interaction, a developmental process that he called "equilibration." Piaget argued that intellectual development takes place through a series of stages generated through the equilibration process. Each stage consists of steps the child must master before moving to the next step. He believed that these stages are not separate from one another, but rather that each stage builds on the previous one in a continuous learning process. He proposed four stages: sensorimotor, pre-operational, concrete operational, and formal operational.
Though he did not believe these stages occurred at any given age, many studies have determined when these cognitive abilities should take place. Piaget claimed that logic and morality develop through constructive stages. Expanding on Piaget's work, Lawrence Kohlberg determined that the process of moral development was principally concerned with justice, and that it continued throughout the individual's lifetime. He suggested three levels of moral reasoning: pre-conventional moral reasoning, conventional moral reasoning, and post-conventional moral reasoning. Pre-conventional moral reasoning is typical of children and is characterized by reasoning that is based on rewards and punishments associated with different courses of action. Conventional moral reasoning occurs during late childhood and early adolescence and is characterized by reasoning based on the rules and conventions of society. Lastly, post-conventional moral reasoning is a stage during which the individual sees society's rules and conventions as relative and subjective, rather than as authoritative. Kohlberg used the Heinz Dilemma to illustrate his stages of moral development. The Heinz Dilemma involves Heinz's wife dying from cancer and Heinz facing the dilemma of whether to save his wife by stealing a drug; reasoning at the pre-conventional, conventional, and post-conventional levels can each be applied to Heinz's situation. Recent scholarship challenges the "deficit models" of Piaget and Kohlberg, which portrayed humans as arriving in the world amoral. Instead, research suggests infants possess innate "pre-moral" capacities and biological substrates for ethics. This perspective argues that children have cognitive equipment naturally receptive to divine attributes, such as immortality and omniscience, suggesting an innate orientation toward sensing transcendence. German-American psychologist Erik Erikson and his collaborator and wife, Joan Erikson, posit eight stages of individual human development influenced by biological, psychological, and social factors throughout the lifespan. At each stage the person must resolve a challenge, or an existential dilemma. Successful resolution of the dilemma results in the person ingraining a positive virtue, but failure to resolve the fundamental challenge of that stage reinforces negative perceptions of the person or the world around them, and the person's personal development is unable to progress. The first stage, "Trust vs. Mistrust", takes place in infancy. The positive virtue for the first stage is hope: the infant learns whom to trust and develops hope that a supportive group of people will be there for them. The second stage is "Autonomy vs. Shame and Doubt", with the positive virtue being will. This takes place in early childhood when the child learns to become more independent by discovering what they are capable of, whereas if the child is overly controlled, feelings of inadequacy are reinforced, which can lead to low self-esteem and doubt. The third stage is "Initiative vs. Guilt". The virtue to be gained is a sense of purpose. This takes place primarily via play. This is the stage where the child will be curious and have many interactions with other kids. They will ask many questions as their curiosity grows. If too much guilt is present, the child may have a slower and harder time interacting with their world and other children in it. The fourth stage is "Industry (competence) vs. Inferiority". The virtue for this stage is competency, which results from the child's early experiences in school.
This stage is when the child will try to win the approval of others and understand the value of their accomplishments. The fifth stage is "Identity vs. Role Confusion". The virtue gained is fidelity and it takes place in adolescence. This is when the child ideally starts to identify their place in society, particularly in terms of their gender role. The sixth stage is "Intimacy vs. Isolation", which happens in young adults and the virtue gained is love. This is when the person starts to share his/her life with someone else intimately and emotionally. Not doing so can reinforce feelings of isolation. The seventh stage is "Generativity vs. Stagnation". This happens in adulthood and the virtue gained is care. A person becomes stable and starts to give back by raising a family and becoming involved in the community. The eighth stage is "Ego Integrity vs. Despair". When one grows old, they look back on their life and contemplate their successes and failures. If they resolve this positively, the virtue of wisdom is gained. This is also the stage when one can gain a sense of closure and accept death without regret or fear. Michael Commons enhanced and simplified Bärbel Inhelder and Piaget's developmental theory and offered a standard method of examining the universal pattern of development. The Model of Hierarchical Complexity (MHC) is not based on the assessment of domain-specific information; it separates the order of hierarchical complexity of the tasks to be addressed from the stage of performance on those tasks. A stage is the order of hierarchical complexity of the tasks the participant successfully addresses. Commons expanded Piaget's original eight stages (counting the half stages) to seventeen. The order of hierarchical complexity of tasks predicts how difficult the performance is, with correlations (R) ranging from 0.9 to 0.98. In the MHC, there are three main axioms that an order must meet for a higher-order task to coordinate the next lower-order task. Axioms are rules that are followed to determine how the MHC orders actions to form a hierarchy. According to these axioms, a higher-order task action is: a) defined in terms of task actions at the next lower order of hierarchical complexity; b) defined as organizing two or more less complex actions, that is, the more complex action specifies the way in which the less complex actions combine; and c) defined such that the lower-order task actions are carried out non-arbitrarily (Commons, M. L., Gane-McCalla, R., Barker, C. D., & Li, E. Y. (2014). "The model of hierarchical complexity as a measurement system". Behavioral Development Bulletin, 19(3), 9–68. doi:10.1037/h0100589). Ecological systems theory, originally formulated by Urie Bronfenbrenner, specifies four types of nested environmental systems, with bi-directional influences within and between the systems. The four systems are the microsystem, mesosystem, exosystem, and macrosystem. Each system contains roles, norms and rules that can powerfully shape development. The microsystem is the direct environment in our lives, such as our home and school. The mesosystem is how relationships connect to the microsystem. The exosystem is a larger social system in which the child plays no direct role. The macrosystem refers to the cultural values, customs and laws of society. In more detail, the microsystem is the immediate environment surrounding and influencing the individual (for example, school or the home setting).
The mesosystem is the combination of two microsystems and how they influence each other (for example, sibling relationships at home vs. peer relationships at school). The exosystem is the interaction among two or more settings that are indirectly linked (for example, a father's job requiring more overtime ends up influencing his daughter's performance in school because he can no longer help with her homework). The macrosystem is broader, taking into account socioeconomic status, culture, beliefs, customs and morals (for example, a child from a wealthier family sees a peer from a less wealthy family as inferior for that reason). Lastly, the chronosystem refers to the chronological nature of life events and how they interact and change the individual and their circumstances through transition (for example, a mother losing her own mother to illness and no longer having that support in her life). Since its publication in 1979, Bronfenbrenner's major statement of this theory, The Ecology of Human Development, has had widespread influence on the way psychologists and others approach the study of human beings and their environments. As a result of this conceptualization of development, these environments—from the family to economic and political structures—have come to be viewed as part of the life course from childhood through to adulthood. Modern research addresses a "contemporary paradox" in which institutional religious participation is declining among youth, yet spiritual seeking and belief in God remain high. Some theorists propose "spiritual transcendence" as a sixth personality factor beyond the Big Five, arguing it is a fundamental dimension of human nature rather than a derivative of other traits. This field explores how attachment styles influence an individual's "God concepts" and how adolescent biological maturation can influence religious devotion through psychoanalytic sublimation. Lev Vygotsky was a Russian theorist from the Soviet era who posited that children learn through hands-on experience and social interactions with members of their culture. Vygotsky believed that a child's development should be examined during problem-solving activities. Unlike Piaget, he claimed that timely and sensitive intervention by adults when a child is on the edge of learning a new task (called the "zone of proximal development") could help children learn new tasks. The zone of proximal development is a tool used to explain children's learning through collaborative problem-solving activities with an adult or peer. The adult role is often referred to as that of the skilled "master", whereas the child is considered the learning apprentice, through an educational process often termed "cognitive apprenticeship". Martin Hill stated that "The world of reality does not apply to the mind of a child." This technique is called "scaffolding", because it builds upon knowledge children already have with new knowledge that adults can help the child learn. Vygotsky was strongly focused on the role of culture in determining the child's pattern of development, arguing that development moves from the social level to the individual level. In other words, Vygotsky claimed that psychology should focus on the progress of human consciousness through the relationship of an individual and their environment. He felt that if scholars continued to disregard this connection, then this disregard would inhibit the full comprehension of the human consciousness.
Constructivism is a paradigm in psychology that characterizes learning as a process of actively constructing knowledge. Individuals create meaning for themselves or make sense of new information by selecting, organizing, and integrating information with other knowledge, often in the context of social interactions. Constructivism can occur in two ways: individual and social. Individual constructivism occurs when a person constructs knowledge through cognitive processes of their own experiences rather than by memorizing facts provided by others. Social constructivism occurs when individuals construct knowledge through an interaction between the knowledge they bring to a situation and social or cultural exchanges within that context. A foundational concept of constructivism is that the purpose of cognition is to organize one's experiential world, instead of the ontological world around them. Jean Piaget, a Swiss developmental psychologist, proposed that learning is an active process because children learn through experience, making mistakes and solving problems. Piaget proposed that learning should be whole, helping students understand that meaning is constructed. Evolutionary developmental psychology is a research paradigm that applies the basic principles of Darwinian evolution, particularly natural selection, to understand the development of human behavior and cognition. It involves the study of both the genetic and environmental mechanisms that underlie the development of social and cognitive competencies, as well as the epigenetic (gene-environment interaction) processes that adapt these competencies to local conditions. EDP considers both the reliably developing, species-typical features of ontogeny (developmental adaptations) and individual differences in behavior from an evolutionary perspective. While evolutionary views tend to regard most individual differences as the result of either random genetic noise (evolutionary byproducts) and/or idiosyncrasies (for example, peer groups, education, neighborhoods, and chance encounters) rather than products of natural selection, EDP asserts that natural selection can favor the emergence of individual differences via "adaptive developmental plasticity". From this perspective, human development follows alternative life-history strategies in response to environmental variability, rather than following one species-typical pattern of development. EDP is closely linked to the theoretical framework of evolutionary psychology (EP), but is also distinct from EP in several domains, including research emphasis (EDP focuses on adaptations of ontogeny, as opposed to adaptations of adulthood) and consideration of proximate ontogenetic and environmental factors (i.e., how development happens) in addition to more ultimate factors (i.e., why development happens), which are the focus of mainstream evolutionary psychology. Attachment theory, originally developed by John Bowlby, focuses on the importance of open, intimate, emotionally meaningful relationships. Attachment is described as a biological system or powerful survival impulse that evolved to ensure the survival of the infant. A threatened or stressed child will move toward caregivers who create a sense of physical, emotional, and psychological safety for the individual. Attachment feeds on body contact and familiarity. Psychologist Harry Harlow's research with infant rhesus monkeys in the mid-20th century provided pivotal experimental support for attachment theory.
His studies found that infant monkeys consistently preferred cloth surrogate mothers that provided comfort over wire ones that offered only food. These results demonstrated that emotional security and physical comfort are more critical to attachment than nourishment alone. Harlow's findings reinforced Bowlby's view that early caregiving relationships are biologically essential for healthy emotional development and social bonding later in life. Later, Mary Ainsworth developed the Strange Situation protocol and the concept of the secure base. Tools such as the Strange Situation Test and the Adult Attachment Interview have been found to help researchers understand attachment; both help determine the factors contributing to particular attachment styles. The Strange Situation Test helps identify "disturbances in attachment" and whether certain attributes contribute to a particular attachment issue. The Adult Attachment Interview is similar to the Strange Situation Test but instead focuses on attachment issues found in adults. Both tests have helped many researchers gain more information on the risks and how to identify them. Theorists have proposed four types of attachment styles: secure, anxious-avoidant, anxious-resistant, and disorganized. Secure attachment is a healthy attachment between the infant and the caregiver. It is characterized by trust. Anxious-avoidant is an insecure attachment between an infant and a caregiver, characterized by the infant's indifference toward the caregiver. Anxious-resistant is an insecure attachment between the infant and the caregiver, characterized by distress from the infant when separated and anger when reunited. Disorganized is an attachment style without a consistent pattern of responses upon return of the parent. A child's innate propensity to develop bonds can, however, be thwarted. Some infants are kept in isolation or subjected to severe neglect or abuse, or they are raised without the stimulation and care of a regular caregiver. This deprivation may cause short-term consequences such as separation, rage, despair, and a brief lag in cerebral growth. Increased aggression, clinging behavior, alienation, psychosomatic illnesses, and an elevated risk of adult depression are among the long-term consequences. According to attachment theory, people's capacity to develop healthy social and emotional ties later in life is greatly impacted by their early relationships with their primary caregivers, especially during infancy. This suggests that humans have an inbuilt need to develop strong bonds with caregivers in order to survive and be healthy. Childhood attachment styles can have an impact on how people behave in adult social situations, including romantic partnerships. A significant concern of developmental psychology is the relationship between innateness and environmental influences on development. This is often referred to as "nature and nurture", or nativism versus empiricism. A nativist account of development would argue that the processes in question are innate, that is, they are specified by the organism's genes. What makes a person who they are: their environment or their genetics? This is the debate of nature versus nurture. According to an empiricist viewpoint, those processes are learned through interaction with the environment. Today most developmental psychologists take a more holistic approach, emphasizing the interaction between genetic and environmental influences.
One of the ways this relationship has been explored in recent years is through the emerging field of evolutionary developmental psychology. The dispute over innateness has been well represented in the field of language acquisition studies. A major question in this area is whether certain properties of human language are specified genetically or can be acquired through learning. The empiricist position on the issue of language acquisition suggests that the language input provides the necessary information required for learning the structure of language and that infants acquire language through a process of statistical learning. From this perspective, language can be acquired via general learning methods that also apply to other aspects of development, such as perceptual learning. The nativist position argues that the input from language is too impoverished for infants and children to acquire the structure of language. Linguist Noam Chomsky asserts that, given the lack of sufficient information in the language input, there is a universal grammar that applies to all human languages and is pre-specified. This has led to the idea that there is a special cognitive module suited for learning language, often called the language acquisition device. Chomsky's critique of the behaviorist model of language acquisition is regarded by many as a key turning point in the decline in the prominence of the theory of behaviorism generally. But Skinner's conception of "Verbal Behavior" has not died, perhaps in part because it has generated successful practical applications. There may instead be "strong interactions of both nature and nurture". Many researchers now emphasize that development results from a continuous, dynamic interaction between genetic predispositions and environmental influences. Rather than acting independently, nature and nurture are seen as intertwined forces, where genetic factors can shape sensitivity to environmental inputs, and environmental conditions can influence how genes are expressed across development. One of the major discussions in developmental psychology is whether development is discontinuous or continuous. Continuous development is quantifiable and quantitative, whereas discontinuous development is qualitative. Quantitative estimates of development include measuring a child's stature or measuring their memory or attention span. "Particularly dramatic examples of qualitative changes are metamorphoses, such as the emergence of a caterpillar into a butterfly." Psychologists who support the continuous view of development propose that development involves gradual and ongoing change throughout the lifespan, with behavior in the earlier stages of development providing the basis for the skills and abilities required in later stages. "To many, the concept of continuous, quantifiable measurement seems to be the essence of science." However, not all psychologists agree that development is a continuous process. Some see development as a discontinuous process, involving distinct and separate stages with different kinds of behavior occurring in each stage. This suggests that the development of certain abilities in each stage, such as specific emotions or ways of thinking, has a definite starting and ending point. Nevertheless, there is no exact moment when a capacity suddenly appears or disappears.
Although some ways of thinking, feeling, or behaving may seem to appear abruptly, it is more than likely that they have been developing gradually for some time. Stage theories of development rest on the assumption that development is a discontinuous process involving distinct stages which are characterized by qualitative differences in behavior. They also assume that the structure of the stages does not vary from person to person, although the timing of each stage may vary individually. Stage theories can be contrasted with continuous theories, which posit that development is an incremental process. A related issue involves the degree to which people become older renditions of their early experience or whether they develop into something different from who they were at an earlier point in development. It considers the extent to which early experiences (especially infancy) or later experiences are the key determinants of a person's development. Stability is defined as the consistent ordering of individual differences with respect to some attribute; change refers to alteration in that attribute over time. Most lifespan developmentalists recognize that extreme positions are unwise. Therefore, the key to a comprehensive understanding of development at any stage requires the interaction of different factors and not only one. Theory of mind is the ability to attribute mental states to ourselves and others. It is a complex but vital process in which children begin to understand the emotions, motives, and feelings of not only themselves but also others. Theory of mind allows individuals to understand that others have unique beliefs and desires different from their own. This ability enables successful social interactions by recognizing and interpreting the mental states of others. If a child does not fully develop theory of mind within the crucial first five years of life, they can suffer from communication barriers that follow them into adolescence and adulthood. Exposure to more people, and the availability of stimuli that encourage social-cognitive growth, depend heavily on the family. Mathematical models Developmental psychology is concerned not only with describing the characteristics of psychological change over time but also seeks to explain the principles and internal workings underlying these changes. Psychologists have attempted to better understand these factors by using models. A model must simply account for the means by which a process takes place. This is sometimes done in reference to changes in the brain that may correspond to changes in behavior over the course of development. Mathematical modeling is useful in developmental psychology for implementing theory in a precise and easy-to-study manner, allowing generation, explanation, integration, and prediction of diverse phenomena. Several modeling techniques are applied to development: symbolic, connectionist (neural network), and dynamical systems models. Dynamic systems models illustrate how many different features of a complex system may interact to yield emergent behaviors and abilities. Nonlinear dynamics has been applied to human systems specifically to address issues that require attention to temporality, such as life transitions, human development, and behavioral or emotional change over time. Nonlinear dynamical systems theory is currently being explored as a way to explain discrete phenomena of human development such as affect, second language acquisition, and locomotion.
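To make the dynamic-systems point concrete, here is a minimal sketch in Python of how two continuously changing, mutually coupled abilities can yield an apparently abrupt, stage-like shift. The variable names ("memory", "skill"), growth rates, and gating function are illustrative assumptions chosen for demonstration, not parameters taken from any published developmental model.

```python
# Minimal dynamic-systems-style sketch (illustrative assumptions only).
# Two coupled quantities change continuously, yet the second shows a
# sudden-looking, stage-like takeoff once the first crosses a soft threshold.

def simulate(steps=250, dt=0.1):
    memory = 0.05   # hypothetical working-memory capacity, scaled 0..1
    skill = 0.01    # hypothetical problem-solving skill, scaled 0..1
    history = []
    for step in range(steps):
        # Memory capacity grows logistically toward its ceiling.
        d_memory = 0.4 * memory * (1.0 - memory)
        # Skill growth is gated by memory: negligible below ~0.5 capacity,
        # then rising steeply (a soft threshold, not a built-in "stage").
        gate = memory**4 / (memory**4 + 0.5**4)
        d_skill = 0.8 * gate * skill * (1.0 - skill)
        memory += d_memory * dt
        skill += d_skill * dt
        history.append((step * dt, memory, skill))
    return history

if __name__ == "__main__":
    for t, memory, skill in simulate()[::25]:
        print(f"t={t:5.1f}  memory={memory:.3f}  skill={skill:.3f}")
```

Plotting the "skill" column against time would show a long plateau followed by a rapid rise: an emergent discontinuity produced by purely continuous dynamics, which is the kind of behavior dynamic systems modelers point to when reinterpreting stage-like development.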
Research areas One critical aspect of developmental psychology is the study of neural development, which investigates how the brain changes and develops during different stages of life. Studies have shown that the human brain undergoes rapid changes during prenatal and early postnatal periods. These changes include the formation of neurons, the development of neural networks, and the establishment of synaptic connections. The formation of neurons and the establishment of basic neural circuits in the developing brain are crucial for laying the foundation of the brain's structure and function, and disruptions during this period can have long-term effects on cognitive and emotional development. Experiences and environmental factors play a crucial role in shaping neural development. Early sensory experiences, such as exposure to language and visual stimuli, can influence the development of neural pathways related to perception and language processing. Genetic factors also play a major role in neural development: they can influence the timing and pattern of neural development, as well as susceptibility to certain developmental disorders, such as autism spectrum disorder and attention-deficit/hyperactivity disorder. Research finds that the adolescent brain undergoes significant changes in neural connectivity and plasticity. During this period, there is a pruning process in which certain neural connections are strengthened while others are eliminated, resulting in more efficient neural networks and increased cognitive abilities, such as decision-making and impulse control. The study of neural development provides crucial insights into the complex interplay between genetics, environment, and experiences in shaping the developing brain. By understanding the neural processes underlying developmental changes, researchers gain a better understanding of cognitive, emotional, and social development in humans. Cognitive development is primarily concerned with how infants and children acquire, develop, and use internal mental capabilities such as problem-solving, memory, and language. Major topics in cognitive development are the study of language acquisition and the development of perceptual and motor skills. Piaget was one of the influential early psychologists to study the development of cognitive abilities. His theory suggests that development proceeds through a set of stages from infancy to adulthood and that there is an end point or goal. Other accounts, such as that of Lev Vygotsky, have suggested that development does not progress through stages, but rather that the developmental process that begins at birth and continues until death is too complex for such structure and finality. Rather, from this viewpoint, developmental processes proceed more continuously, and development should be analyzed as a process instead of treated as a product to obtain. K. Warner Schaie has expanded the study of cognitive development into adulthood. Rather than being stable from adolescence, Schaie sees adults as progressing in the application of their cognitive abilities. Modern cognitive development has integrated the considerations of cognitive psychology and the psychology of individual differences into the interpretation and modeling of development.
Specifically, the neo-Piagetian theories of cognitive development showed that the successive levels or stages of cognitive development are associated with increasing processing efficiency and working memory capacity. These increases explain differences between stages, progression to higher stages, and individual differences among children who are of the same age and grade level. However, other theories have moved away from Piagetian stage theories and are influenced by accounts of domain-specific information processing, which posit that development is guided by innate, evolutionarily specified and content-specific information processing mechanisms. Developmental psychologists who are interested in social development examine how individuals develop social and emotional competencies. For example, they study how children form friendships, how they understand and deal with emotions, and how identity develops. Research in this area may involve study of the relationship between cognition or cognitive development and social behavior. Emotional regulation (ER) refers to an individual's ability to modulate emotional responses across a variety of contexts. In young children, this modulation is in part controlled externally, by parents and other authority figures. As children develop, they take on more and more responsibility for their internal state. Studies have shown that the development of ER is affected by the emotional regulation children observe in parents and caretakers, the emotional climate in the home, and the reaction of parents and caretakers to the child's emotions. Music can also stimulate and enhance a child's senses through self-expression. A child's social and emotional development can be disrupted by motor coordination problems, as described by the environmental stress hypothesis. The environmental stress hypothesis explains how children with coordination problems and developmental coordination disorder are exposed to several psychosocial consequences which act as secondary stressors, leading to an increase in internalizing symptoms such as depression and anxiety. Motor coordination problems affect fine and gross motor movement as well as perceptual-motor skills. Secondary stressors commonly identified include the tendency for children with poor motor skills to be less likely to participate in organized play with other children and more likely to feel socially isolated. Social and emotional development focuses on five key areas: Self-Awareness, Self-Management, Social Awareness, Relationship Skills and Responsible Decision Making. Physical development concerns the physical maturation of an individual's body until it reaches the adult stature. Although physical growth is a highly regular process, all children differ tremendously in the timing of their growth spurts. Studies are being done to analyze how the differences in these timings affect and are related to other variables of developmental psychology such as information processing speed. Traditional measures of physical maturity using X-rays are less common nowadays than simple measurements of body parts such as height, weight, head circumference, and arm span. A few other topics studied in physical developmental psychology are the phonological abilities of mature 5- to 11-year-olds and the controversial hypothesis that left-handers are maturationally delayed compared to right-handers.
A 1996 study by Eaton, Chipperfield, Ritchot, and Kostiuk found in three different samples that there was no difference between right- and left-handers. Researchers interested in memory development look at the way our memory develops from childhood onward. According to fuzzy-trace theory, a theory of cognition originally proposed by Valerie F. Reyna and Charles Brainerd, people have two separate memory processes: verbatim and gist. These two traces begin to develop at different times as well as at different paces. Children as young as four years old have verbatim memory, memory for surface information, which increases up to early adulthood, at which point it begins to decline. On the other hand, our capacity for gist memory, memory for semantic information, increases up to early adulthood, at which point it remains consistent through old age. Furthermore, one's reliance on gist memory traces increases as one ages. Neuroscientific research has contributed to understanding the biological mechanisms behind memory development. A study using diffusion MRI in children aged four to twelve found that greater maturity in white matter tracts, specifically the uncinate fasciculus and dorsal cingulum bundle, was associated with stronger episodic memory recall. These findings suggest that the structural development of white matter pathways plays a significant role in memory function during childhood. Research methods and designs Developmental psychology employs many of the research methods used in other areas of psychology. However, infants and children cannot be tested in the same ways as adults, so different methods are often used to study their development. Developmental psychologists have a number of methods to study changes in individuals over time. Common research methods include systematic observation, including naturalistic observation or structured observation; self-reports, which could be clinical interviews or structured interviews; the clinical or case study method; and ethnography or participant observation. These methods differ in the extent of control researchers impose on study conditions, and how they construct ideas about which variables to study. Every developmental investigation can be characterized in terms of whether its underlying strategy involves the experimental, correlational, or case study approach. The experimental method involves "actual manipulation of various treatments, circumstances, or events to which the participant or subject is exposed"; the experimental design points to cause-and-effect relationships. This method allows for strong inferences to be made about causal relationships between the manipulation of one or more independent variables and subsequent behavior, as measured by the dependent variable. The advantage of using this research method is that it permits determination of cause-and-effect relationships among variables. On the other hand, the limitation is that data obtained in an artificial environment may lack generalizability. The correlational method explores the relationship between two or more events by gathering information about these variables without researcher intervention. The advantage of using a correlational design is that it estimates the strength and direction of relationships among variables in the natural environment; however, the limitation is that it does not permit determination of cause-and-effect relationships among variables.
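As a minimal illustration of what a correlational analysis yields, the sketch below computes a Pearson correlation between age and a hypothetical digit-span score in a small cross-sectional sample. The data and variable names are invented for demonstration; the point is that the method quantifies the strength and direction of an association without supporting causal conclusions.

```python
# Correlational method sketch with invented data (not from any real study).
from math import sqrt

# Hypothetical cross-sectional sample: (age in years, digit-span score).
sample = [(4, 3), (5, 4), (6, 4), (7, 5), (8, 5), (9, 6), (10, 6), (11, 7)]

def pearson_r(pairs):
    n = len(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

print(f"Pearson r between age and span: {pearson_r(sample):.2f}")
# A strong positive r describes the association in this sample; it does not
# show that growing older causes the increase -- schooling, practice, or
# other unmeasured variables could be responsible.
```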
The case study approach allows investigators to obtain an in-depth understanding of an individual participant by collecting data based on interviews, structured questionnaires, observations, and test scores. Each of these methods has its strengths and weaknesses, but the experimental method, when appropriate, is the preferred method of developmental scientists because it provides a controlled situation and allows conclusions to be drawn about cause-and-effect relationships. Most developmental studies, regardless of whether they employ the experimental, correlational, or case study method, can also be constructed using research designs. Research designs are logical frameworks used to make key comparisons within research studies, such as the following. In a longitudinal study, a researcher observes many individuals born at or around the same time (a cohort) and carries out new observations as members of the cohort age. This method can be used to draw conclusions about which types of development are universal (or normative) and occur in most members of a cohort. As an example, a longitudinal study of early literacy development examined in detail the early literacy experiences of one child in each of 30 families. Researchers may also observe ways that development varies between individuals, and hypothesize about the causes of variation in their data. Longitudinal studies often require large amounts of time and funding, making them unfeasible in some situations. Also, because members of a cohort all experience historical events unique to their generation, apparently normative developmental trends may, in fact, be universal only to their cohort. In a cross-sectional study, a researcher observes differences between individuals of different ages at the same time. This generally requires fewer resources than the longitudinal method, and because the individuals come from different cohorts, shared historical events are not so much of a confounding factor. By the same token, however, cross-sectional research may not be the most effective way to study differences between participants, as these differences may result not from their different ages but from their exposure to different historical events. A third study design, the sequential design, combines both methodologies. Here, a researcher observes members of different birth cohorts at the same time, and then tracks all participants over time, charting changes in the groups. While much more resource-intensive, the format aids in a clearer distinction between changes that can be attributed to an individual or historical environment and those that are truly universal. Because every method has some weaknesses, developmental psychologists rarely rely on one study or even one method to reach conclusions; instead, they look for consistent evidence from as many converging sources as possible. Life stages of psychological development Prenatal development is of interest to psychologists investigating the context of early psychological development. Prenatal development involves three main stages: the germinal stage, the embryonic stage and the fetal stage. The germinal stage lasts from conception until 2 weeks; the embryonic stage from 2 weeks to 8 weeks; and the fetal stage from 9 weeks until birth. The senses develop in the womb itself: a fetus can both see and hear by the second trimester (13 to 24 weeks of age). The sense of touch develops in the embryonic stage (5 to 8 weeks). Most of the brain's billions of neurons also are developed by the second trimester.
Babies are hence born with some odor, taste and sound preferences, largely related to the mother's environment. Some primitive reflexes also arise before birth and are still present in newborns. One hypothesis is that these reflexes are vestigial and have limited use in early human life. Piaget's theory of cognitive development suggested that some early reflexes are building blocks for infant sensorimotor development. For example, the tonic neck reflex may help development by bringing objects into the infant's field of view. Other reflexes, such as the walking reflex, appear to be replaced by more sophisticated voluntary control later in infancy. This may be because the infant gains too much weight after birth to be strong enough to use the reflex, or because the reflex and subsequent development are functionally different. It has also been suggested that some reflexes (for example the Moro and walking reflexes) are predominantly adaptations to life in the womb with little connection to early infant development. Primitive reflexes reappear in adults under certain conditions, such as neurological conditions like dementia or traumatic lesions. Ultrasounds have shown that infants are capable of a range of movements in the womb, many of which appear to be more than simple reflexes. By the time they are born, infants can recognize and have a preference for their mother's voice, suggesting some prenatal development of auditory perception. Prenatal development and birth complications may also be connected to neurodevelopmental disorders, for example in schizophrenia. With the advent of cognitive neuroscience, embryology and the neuroscience of prenatal development are of increasing interest to developmental psychology research. Theoretical extensions in the field now investigate "embryonic spiritual life". While debated, these theories cite fetal sensory capacities to suggest that prenatal experiences may influence the development of later spiritual representations and self-transcendence. Several environmental agents—teratogens—can cause damage during the prenatal period. These include prescription and nonprescription drugs, illegal drugs, tobacco, alcohol, environmental pollutants, infectious disease agents such as the rubella virus and the toxoplasmosis parasite, maternal malnutrition, maternal emotional stress, and Rh factor blood incompatibility between mother and child. Many statistics document the effects of these substances: for example, at least 100,000 "cocaine babies" were born in the United States annually in the late 1980s. "Cocaine babies" have been reported to have severe and lasting difficulties which persist throughout infancy and childhood. The drug has also been associated with behavioural problems in affected children and with defects of various vital organs. From birth until the first year, children are referred to as infants. As they grow, children respond to their environment in unique ways. Developmental psychologists vary widely in their assessment of infant psychology, and the influence the outside world has upon it. The majority of a newborn infant's time is spent sleeping. At first, their sleep cycles are evenly spread throughout the day and night, but after a couple of months, infants generally become diurnal. In both human and rodent infants, a diurnal cortisol rhythm is always observed, which is sometimes entrained with a maternal substance.
Nevertheless, the circadian rhythm starts to take shape, and a 24-hour rhythm is observed within just a few months after birth. Infants can be seen to have six states, grouped into pairs. Infant perception is what a newborn can see, hear, smell, taste, and touch. These five features are referred to as the "five senses". Because of these different senses, infants respond to stimuli differently. Babies are born with the ability to discriminate virtually all sounds of all human languages. Infants of around six months can differentiate between phonemes in their own language, but not between similar phonemes in another language. Notably, infants are able to differentiate between various durations and sound levels and can differentiate between the languages they have encountered, which makes it easier for an infant to learn a language than for an adult. At this stage infants also start to babble, producing vowel-consonant sounds as they try to grasp the meaning of language and copy whatever they hear in their surroundings, producing their own phonemes. In various cultures, a distinct form of speech called "babytalk" is used when communicating with newborns and young children. This register consists of simplified terms for common topics such as family members, food, hygiene, and familiar animals. It also exhibits specific phonological patterns, such as substituting alveolar sounds with initial velar sounds, especially in languages like English. Furthermore, babytalk often involves morphological simplifications, such as regularizing verb conjugations (for instance, saying "corned" instead of "cornered" or "goed" instead of "went"). This language is typically taught to children and is perceived as their natural way of communication. In mythology and popular culture, certain characters, such as the "Hausa trickster" or the Warner Bros cartoon character "Tweety Pie", are portrayed as speaking in a babytalk-like manner. Piaget suggested that an infant's perception and understanding of the world depended on their motor development, which was required for the infant to link visual, tactile and motor representations of objects. The concept of object permanence refers to the knowledge that an object exists even when it is not directly perceived or visible; in other words, something is still there even if it is not visible. This is a crucial developmental milestone for infants, who learn that something is not necessarily lost forever just because it is hidden. When a child displays object permanence, they will look for a toy that is hidden, showing that they are aware that the item is still there even when it is covered by a blanket. Most babies start to exhibit signs of object permanence around the age of eight months. According to this theory, infants develop object permanence through touching and handling objects. Piaget's sensorimotor stage comprised six sub-stages (see sensorimotor stages for more detail). In the early stages, development arises out of movements caused by primitive reflexes. Discovery of new behaviors results from classical and operant conditioning, and the formation of habits. From eight months the infant is able to uncover a hidden object but will perseverate, continuing to search in the original location, when the object is moved. Piaget concluded that infants lacked object permanence before 18 months because infants before this age failed to look for an object where it had last been seen.
Instead, infants continued to look for an object where it was first seen, committing the "A-not-B error". Some researchers have suggested that before the age of 8–9 months, infants' inability to understand object permanence extends to people, which explains why infants at this age do not cry when their mothers are gone ("out of sight, out of mind"). In the 1980s and 1990s, researchers developed new methods of assessing infants' understanding of the world with far more precision and subtlety than Piaget was able to do in his time. Since then, many studies based on these methods suggest that young infants understand far more about the world than first thought. Based on recent findings, some researchers (such as Elizabeth Spelke and Renée Baillargeon) have proposed that an understanding of object permanence is not learned at all, but rather comprises part of the innate cognitive capacities of our species. According to Jean Piaget's developmental psychology, object permanence, or the awareness that objects exist even when they are no longer visible, was thought to emerge gradually between the ages of 8 and 12 months. However, experts such as Spelke and Baillargeon have questioned this notion. They studied infants' comprehension of object permanence at a young age using novel experimental approaches such as violation-of-expectation paradigms. These findings imply that children as young as 3 to 4 months old may have an innate awareness of object permanence. Baillargeon's "drawbridge" experiment, for example, showed that infants were surprised when they saw occurrences that contradicted object permanence expectations. This proposition has important consequences for our understanding of infant cognition, implying that infants may be born with core cognitive abilities rather than developing them via experience and learning. Other research has suggested that young infants in their first six months of life may possess an understanding of numerous aspects of the world around them. There are critical periods in infancy and childhood during which development of certain perceptual, sensorimotor, social and language systems depends crucially on environmental stimulation. Feral children such as Genie, deprived of adequate stimulation, fail to acquire important skills and are unable to learn in later childhood. Genie is cited as a case of a feral child because she was socially neglected and abused as a young girl; as a result of this neglect and lack of human contact, her psychological development was abnormal, including severe problems with language. The concept of critical periods is also well established in neurophysiology, from the work of Hubel and Wiesel among others. Infant neurophysiology examines the correlations between neurophysiological findings and clinical features, and provides vital information on rare and common neurological disorders that affect infants. Studies have been done to look at the differences between children who have developmental delays and those with typical development. Normally, when such children are compared to one another, mental age (MA) is not taken into consideration. There may still be differences between developmentally delayed (DD) and typically developing (TD) children in behavioral, emotional and other mental disorders. When compared to children matched for MA, there is a bigger difference in typical developmental behaviors overall.
DDs can cause lower MA, so comparing DD children with TD children of the same chronological age may not be accurate; pairing DD children with TD children of similar MA can be more accurate. There are levels of behavioral difference that are considered normal at certain ages. When evaluating DDs and MA in children, the question is whether those with DDs show a larger amount of behavior that is atypical for their MA group. Children with developmental delays also tend to develop more disorders or difficulties than their TD counterparts. Between the ages of one and two, infants shift to a developmental stage known as toddlerhood. The transition into toddlerhood is highlighted by self-awareness, developing maturity in language use, and the presence of memory and imagination. During toddlerhood, babies begin learning how to walk, talk, and make decisions for themselves. An important characteristic of this age period is the development of language, where children are learning how to communicate and express their emotions and desires through the use of vocal sounds, babbling, and eventually words. Self-control also begins to develop. At this age, children take the initiative to explore, experiment and learn from making mistakes. Caretakers who encourage toddlers to try new things and test their limits help the child become autonomous, self-reliant, and confident. If the caretaker is overprotective or disapproving of independent actions, the toddler may begin to doubt their abilities and feel ashamed of the desire for independence. The child's development of autonomy is inhibited, leaving them less prepared to deal with the world in the future. Toddlers also begin to identify themselves in gender roles, acting according to their perception of what a man or woman should do. Socially, the period of toddlerhood is commonly called the "terrible twos". Toddlers often use their new-found language abilities to voice their desires but are often misunderstood by parents because their language skills are only beginning to develop. Their testing of their independence is another reason behind the stage's infamous label, and tantrums in a fit of frustration are also common. Erik Erikson divides childhood into four stages, each with its own distinct psychosocial crisis. In infancy, the psychosocial crisis for Erikson is Trust versus Mistrust. Needs are the foundation for the infant's gaining or losing of trust: if needs are met, trust in the guardian and the world forms; if needs are not met, or the infant is neglected, mistrust forms alongside feelings of anxiety and fear. Autonomy versus Shame follows trust in infancy. The child begins to explore their world in this stage and discovers preferences in what they like. If autonomy is allowed, the child grows in independence and ability. If freedom of exploration is hindered, feelings of shame and low self-esteem result. In the earliest years, children are "completely dependent on the care of others". Therefore, they develop a "social relationship" with their caregivers and, later, with family members. During their preschool years (3–5), they "enlarge their social horizons" to include people outside the family. Preoperational and then operational thinking develops, which means actions are reversible and egocentric thought diminishes. The motor skills of preschoolers increase so they can do more things for themselves. They become more independent. No longer completely dependent on the care of others, the world of this age group expands.
More people have a role in shaping their individual personalities. Preschoolers explore and question their world. For Jean Piaget, the child is "a little scientist exploring and reflecting on these explorations to increase competence", and this is done in "a very independent way". Play is a major activity for ages 3–5. For Piaget, through play "a child reaches higher levels of cognitive development." In their expanded world, children in the 3–5 age group attempt to find their own way. If this is done in a socially acceptable way, the child develops initiative; if not, the child develops guilt. Children who develop "guilt" rather than "initiative" have failed Erikson's psychosocial crisis for the 3–5 age group. For Erik Erikson, the psychosocial crisis during middle childhood is Industry versus Inferiority which, if successfully met, instills a sense of competency in the child. In all cultures, middle childhood is a time for developing "skills that will be needed in their society". School offers an arena in which children can gain a view of themselves as "industrious (and worthy)". They are "graded for their school work and often for their industry". They can also develop industry outside of school in sports, games, and volunteer work. Children who achieve "success in school or games might develop a feeling of competence." The "peril" during this period is that feelings of inadequacy and inferiority will develop. Parents and teachers can "undermine" a child's development by failing to recognize accomplishments or by being overly critical of a child's efforts. Children who are "encouraged and praised" develop a belief in their competence; lack of encouragement or of the ability to excel leads to "feelings of inadequacy and inferiority". The Centers for Disease Control and Prevention (CDC) divides middle childhood into two stages, 6–8 years and 9–11 years, and gives "developmental milestones for each stage". Entering elementary school, children in this age group begin to think about the future and their "place in the world". Working with other students and wanting their friendship and acceptance become more important. This leads to "more independence from parents and family". As students, they develop the mental and verbal skills "to describe experiences and talk about thoughts and feelings". They become less self-centered and show "more concern for others". For children ages 9–11, "friendships and peer relationships" increase in strength, complexity, and importance. This results in greater "peer pressure". They grow even less dependent on their families, and they are challenged academically. To meet this challenge, they increase their attention span and learn to see other points of view. Adolescence is the period of life between the onset of puberty and the full commitment to an adult social role, such as worker, parent, and/or citizen. It is the period known for the formation of personal and social identity (see Erik Erikson) and the discovery of moral purpose (see William Damon). Intelligence is demonstrated through the logical use of symbols related to abstract concepts and formal reasoning. A return to egocentric thought often occurs early in the period. Only 35% develop the capacity to reason formally during adolescence or adulthood (Huitt, W. and Hummel, J., January 1998). Erik Erikson labels this stage Identity versus Role Confusion. Erikson emphasizes the importance of developing a sense of identity in adolescence because it affects the individual throughout their life.
Identity formation is a lifelong process and is related to curiosity and active engagement. Role confusion describes the individual's current, unresolved state of identity; identity exploration is the process of moving from role confusion to resolution. During Erikson's identity versus role confusion stage, which occurs in adolescence, people struggle to form a cohesive sense of self while exploring many social roles and prospective life paths. This period is characterized by deep introspection, self-examination, and the pursuit of self-understanding. Adolescents are confronted with questions regarding their identity, beliefs, and future goals. The major problem is building a strong sense of identity in the face of societal standards, peer pressure, and personal preferences. Adolescents participate in identity exploration, commitment, and synthesis, actively seeking out new experiences, embracing ideals and aspirations, and merging their changing sense of self into a coherent identity. Successfully navigating this stage builds the groundwork for good psychological development in adulthood, allowing people to pursue meaningful relationships, make positive contributions to society, and handle life's adversities with perseverance and purpose. The stage is divided into three parts. The adolescent unconsciously explores questions such as "Who am I? Who do I want to be?" Like toddlers, adolescents must explore, test limits, become autonomous, and commit to an identity, or sense of self. Different roles, behaviors and ideologies must be tried out to select an identity. Role confusion and inability to choose a vocation can result from a failure to achieve a sense of identity through, for example, friendships. Early adulthood generally refers to the period between ages 18 and 39, and according to theorists such as Erik Erikson, is a stage where development is mainly focused on maintaining relationships. Erikson underscores the importance of relationships by labeling this stage Intimacy versus Isolation. Intimacy suggests a process of becoming part of something larger than oneself through sacrifice in romantic relationships and work toward both life and career goals. Other examples include creating bonds of intimacy, sustaining friendships, and starting a family. Some theorists state that the development of intimacy skills relies on the resolution of previous developmental stages; a sense of identity gained in those stages is also necessary for intimacy to develop. If this skill is not learned, the alternative is alienation, isolation, a fear of commitment, and an inability to depend on others. Isolation, on the other hand, suggests something different than most might expect: Erikson defined it as a delay of commitment in order to maintain freedom. Yet this decision does not come without consequences; Erikson explained that choosing isolation may affect one's chances of getting married, progressing in a career, and overall development. A related framework for studying this part of the lifespan is that of emerging adulthood. Scholars of emerging adulthood, such as Jeffrey Arnett, are not necessarily interested in relationship development. Instead, this concept suggests that after their teenage years people transition into a period characterized not by relationship building and an overall sense of constancy with life, but by years of living with parents, phases of self-discovery, and experimentation. Middle adulthood generally refers to the period between ages 40 and 64.
During this period, middle-aged adults experience a conflict between generativity and stagnation. Generativity is the sense of contributing to society, the next generation, or one's immediate community; stagnation, by contrast, results in a lack of purpose. The adult's identity continues to develop in middle adulthood, and middle-aged adults often adopt characteristics of the opposite gender. The adult realizes they are halfway through their life and often reevaluates vocational and social roles. Life circumstances can also cause a reexamination of identity. Physically, the middle-aged experience a decline in muscular strength, reaction time, sensory keenness, and cardiac output. Women experience menopause at an average age of 48.8 and a sharp drop in the hormone estrogen, while men experience a roughly equivalent endocrine event: andropause, a hormone fluctuation with physical and psychological effects that can be similar to those seen in menopausal females. As men age, lowered testosterone levels can contribute to mood swings and a decline in sperm count. Sexual responsiveness can also be affected, including delays in erection and longer periods of penile stimulation required to achieve ejaculation. The important influence of the biological and social changes experienced by women and men in middle adulthood is reflected in the fact that depression is highest at age 48.5 around the world. The World Health Organization finds "no general agreement on the age at which a person becomes old." Most "developed countries" set the age as 65 or 70, but in developing countries the inability to make an "active contribution" to society, not chronological age, marks the beginning of old age. According to Erikson's stages of psychosocial development, old age is the stage in which individuals assess the quality of their lives. Erikson labels this stage Integrity versus Despair. For integrated persons, there is a sense of fulfillment in life; they have become self-aware and optimistic through life's commitments and connection to others. While reflecting on life, people in this stage develop feelings of contentment with their experiences. If a person falls into despair, they are often disappointed about failures or missed chances in life and may feel that the time left is insufficient to turn things around. Physically, older people experience a decline in muscular strength, reaction time, stamina, hearing, distance perception, and the sense of smell. They are also more susceptible to diseases such as cancer and pneumonia due to a weakened immune system. Programs aimed at balance, muscle strength, and mobility have been shown to reduce disability among mildly (but not more severely) disabled elderly people. Sexual expression depends in large part upon the emotional and physical health of the individual, and many older adults continue to be sexually active and satisfied with their sexual activity. Mental disintegration may also occur, leading to dementia or ailments such as Alzheimer's disease. The average age of onset for dementia is 78.8 years in males and 81.9 in females. It is generally believed that crystallized intelligence increases up to old age, while fluid intelligence decreases with age. Whether normal intelligence increases or decreases with age depends on the measure and the study. Longitudinal studies show that perceptual speed, inductive reasoning, and spatial orientation decline.
An article on adult cognitive development reports that cross-sectional studies show that "some abilities remained stable into early old age". Parenting Parenting variables alone have typically accounted for 20 to 50 percent of the variance in child outcomes. All parents have their own parenting styles, which, according to Kimberly Kopko, are "based upon two aspects of parenting behavior; control and warmth. Parental control refers to the degree to which parents manage their children's behavior. Parental warmth refers to the degree to which parents are accepting and responsive to their children's behavior." Several distinct parenting styles have been described in the child development literature. Parenting research has traditionally focused on mothers, but recent studies highlight the important role of fathers in child development. Children as young as 15 months benefit significantly from substantial engagement with their father. In particular, a study in the U.S. and New Zealand found the presence of the natural father to be the most significant factor in reducing rates of early sexual activity and teenage pregnancy in girls. However, neither a mother nor a father is strictly essential to successful parenting, and both single parents and homosexual couples can support positive child outcomes. Children need at least one consistently responsible adult with whom they can form a positive emotional bond; having multiple such figures further increases the likelihood of positive outcomes. Recent research also suggests that the way parents interact with infants can influence early brain development. Parents who guide their baby's attention during play by shifting their gaze between a toy and the child tend to have infants with more complex brain activity. This attention-guiding behavior helps infants process social cues more effectively. Another parental factor often debated in terms of its effects on child development is divorce. Divorce in itself is not a determining factor of negative child outcomes; in fact, the majority of children from divorcing families fall into the normal range on measures of psychological and cognitive functioning. A number of mediating factors play a role in determining the effects divorce has on a child; for example, divorcing families with young children often face harsher consequences in terms of demographic, social, and economic changes than do families with older children. Positive coparenting after divorce is part of a pattern associated with positive child coping, while hostile parenting behaviors lead to a destructive pattern that leaves children at risk. Additionally, the direct parental relationship with the child also affects the development of a child after a divorce. Overall, the protective factors facilitating positive child development after a divorce are maternal warmth, a positive father-child relationship, and cooperation between parents. Cross-cultural One way to improve developmental psychology would be greater representation of cross-cultural studies. The field of psychology in general assumes that "basic" human developments are universal across populations, yet it relies for the majority of its studies on Western-Educated-Industrialized-Rich and Democratic (W.E.I.R.D.) subjects. Previous research has generalized findings obtained with W.E.I.R.D. samples because many in the field assume that certain aspects of development are exempt from, or unaffected by, life experience.
However, many of these assumptions have proven incorrect or unsupported by empirical research. For example, according to Kohlberg, moral reasoning is dependent on cognitive abilities. While both analytical and holistic cognitive systems have the potential to develop in any adult, Western populations sit at the extreme analytical end, and non-Western populations tend to use holistic processes. Furthermore, moral reasoning in the West considers chiefly those aspects that support autonomy and the individual, whereas non-Western adults emphasize moral behaviors that support the community and maintain an image of holiness or divinity. Not all aspects of human development are universal, and much can be learned from observing different regions and subjects. An example of a non-Western model of developmental stages is the Indian model, which focuses much of its psychological research on morality and interpersonal development. The developmental stages in Indian models are founded on Hinduism, which primarily teaches stages of life in the process of someone discovering their fate, or Dharma. This cross-cultural model can add a perspective on psychological development that Western behavioral science, which has not emphasized kinship, ethnicity, or religion, does not offer. Indian psychologists study the relevance of attentive families during the early stages of life. The early life stages embody a different parenting style from the West's because they do not try to rush children out of dependency; the family is meant to help the child grow into the next developmental stage at a particular age. This way, when children finally integrate into society, they are interconnected with those around them and reach renunciation when they are older. Children are raised in joint families so that in early childhood (ages 6 months to 2 years) the other family members help gradually wean the child from its mother. During ages 2 to 5, the parents do not rush toilet training; instead of training the child to perform this behavior, the child learns to do it at their own pace as they mature. This model of early human development encourages dependency, unlike Western models that value autonomy and independence. Because caregivers are attentive and do not force the child to become independent, children are confident and have a sense of belonging by late childhood and adolescence. This stage in life (5–15 years) is also when children start education and increase their knowledge of Dharma. It is within early and middle adulthood that moral development progresses. Early, middle, and late adulthood are all concerned with caring for others and fulfilling Dharma. The main distinction between early adulthood and middle or late adulthood is how far one's influence reaches: early adulthood emphasizes fulfilling the needs of the immediate family, while later adulthood broadens these responsibilities to the general public. In the old-age life stage, development reaches renunciation, or a complete understanding of Dharma. Current mainstream views in psychology run against the Indian model of human development. The criticism of such models is that the parenting style is overly protective and encourages too much dependency, and that it focuses on interpersonal rather than individual goals. There are also some overlaps and similarities between Erikson's stages of human development and the Indian model, but the two still differ in major ways.
The West prefers Erikson's ideas over the Indian model because they are supported by scientific studies. The life cycles based on Hinduism are less favored because they are not supported by research and focus on an idealized course of human development. See also References Sources Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-298] | [TOKENS: 17273] |
Contents United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. As of 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890. 
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Accounting for more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear-weapon state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vespucius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" rarely refers to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million.
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence.
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since their drafting in 1777, the Articles of Confederation were ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. Washington's resignation as commander-in-chief after the Revolutionary War, and his later refusal as the country's first president to run for a third term, established a precedent for the supremacy of civil authority in the United States and for the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march.
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all the Thirteen Colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. Dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado, and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War.
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th century, by which time the United States' economy outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917.
The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, the rise of radio for mass communication and early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies of World War II in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Italy and Germany until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. withdrawing completely in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The fall of communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology.
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état. Geography The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean. 
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida, and U.S. territories in the Caribbean and Pacific are tropical. The United States experiences more high-impact extreme weather incidents than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves as in the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered most attractive to the population are also the most vulnerable. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks, and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats; the United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S.
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system, where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared among three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C.; the federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. These tribes hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, parties developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party; the former is perceived as relatively liberal in its political platform, while the latter is perceived as relatively conservative. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression.
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement. The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid resumed later. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active-duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and the Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force.
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies in the United States, ranging from the local to the national level. Law in the United States is mainly enforced by local police departments and sheriff's departments in their municipal or county jurisdictions. The state police departments have authority in their respective states, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights, safeguarding national security, enforcing U.S. federal courts' rulings and federal laws, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power parity, and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S. 
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. Treasuries market, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including the USMCA partners Canada and Mexico. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and had the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five, or approximately 13 million, children experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of the few countries in the world without federal paid family leave as a legal right. 
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth as a percentage of GDP. In 2022, the United States was (after China) the country with the second-highest number of published scientific papers. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025 the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, including Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuel, and its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population, but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity. 
It also has the highest number of nuclear power reactors of any country. From 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. The U.S. is among the top ten countries with the highest vehicle ownership per capita (850 vehicles per 1,000 people) in 2022. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to that of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, as well as five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities, and there are also some private airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to more passenger-centered rail in Europe). Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S. 
population had a net gain of one person every 16 seconds, or about 5400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and, at 23%, it had the world's highest rate of children living in single-parent households in 2019. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. De facto, English is the official language of the United States, and in 2025, Executive Order 14224 declared English official. However, the U.S. has never had a de jure official language, as Congress has never passed a law to designate English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken by 1 million people at home in 2010, fell to 857,000 total speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants. 
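The Census Bureau population-clock figure quoted at the start of this passage is simple arithmetic: a net gain of one person every 16 seconds works out to 86,400 / 16 = 5,400 people per day. The short Python sketch below only illustrates that conversion; the 16-second interval and the roughly 5,400-per-day result come from the text, while the function name and the yearly extrapolation are illustrative additions, not figures from the source.

# Sanity check of the population-clock rate quoted above:
# one net new resident every 16 seconds is about 5,400 people per day.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

def net_gain_per_day(seconds_per_new_resident: float) -> float:
    """Convert a 'one person every N seconds' rate into people per day."""
    return SECONDS_PER_DAY / seconds_per_new_resident

if __name__ == "__main__":
    per_day = net_gain_per_day(16)   # 5400.0
    per_year = per_day * 365         # roughly 1.97 million; illustrative, ignores leap days
    print(f"about {per_day:,.0f} people per day")
    print(f"about {per_year:,.0f} people per year")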
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level. This was an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of the population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications. 
In 2010, then-President Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 laureates (having won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditure on higher education, the U.S. spends more per student than the OECD average, and Americans spend more than any other nation in combined public and private spending. Colleges and universities directly funded by the federal government do not charge tuition and are limited to military personnel and government employees; they include the U.S. service academies, the Naval Postgraduate School, and military staff colleges. Despite some student loan forgiveness programs, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization. 
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 with the purpose to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: the National Endowment for the Arts, the National Endowment for the Humanities, the Federal Council on the Arts and the Humanities, and the Institute of Museum and Library Services. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851). 
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In the prior year, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms used include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies supporting around 220,000 jobs and generating $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new, distinct dramatic forms in the Tom shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances, and one is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have arrived early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks. 
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, first invented in the 1930s, and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study demonstrated that general proximity to Manhattan's Garment District has been synonymous with American fashion since its inception in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford and Calvin Klein, are headquartered in Manhattan. Labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S. 
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the most commercially successful and most ticket-selling movies in the world. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World foods, especially pumpkin, corn, potatoes, and turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. It would become the United States' most prestigious culinary school, where many of the most talented American chefs would study prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020, and employed more than 15 million people, representing 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine. 
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All of these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. On the collegiate level, earnings for the member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are some of the most watched national sporting events. In the U.S., intercollegiate sports serve as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country. 
In other international competitions, the United States is home to a number of prestigious events, including the America's Cup, the World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and the Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States; its final match was attended by 90,185 spectators, setting the world record for the largest crowd at a women's sporting event at the time. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup. See also Notes References This article incorporates text from a free content work. Licensed under CC BY-SA IGO 3.0 (license statement/permission). Text taken from World Food and Agriculture – Statistical Yearbook 2023, FAO. External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Six_Assurances] | [TOKENS: 784] |
Contents Six Assurances The Six Assurances are six key foreign policy principles of the United States regarding United States–Taiwan relations. They were issued as unilateral U.S. clarifications to the Third Communiqué between the United States and the People's Republic of China in 1982. They were intended to reassure both Taiwan and the United States Congress that the US would continue to support Taiwan even though it had earlier cut formal diplomatic relations. The assurances were originally proposed by the then Kuomintang (Chinese Nationalist Party) government of the Republic of China on Taiwan during negotiations between the U.S. and the People's Republic of China. The U.S. Reagan administration agreed to the assurances and informed the United States Congress of them in July 1982. Today, the Six Assurances are part of semiformal guidelines used in conducting relations between the US and Taiwan. The assurances have been generally reaffirmed by successive U.S. administrations. Prior to 2016, they were purely informal, but in 2016 their wording was formally adopted by the US House of Representatives and the Senate in non-binding resolutions, upgrading their status to formal but not directly enforceable. Legislative history The United States House of Representatives passed a concurrent resolution on May 16, 2016, giving the first formal wording for the Six Assurances by more or less directly adopting the way the former Assistant Secretary of State for East Asian and Pacific Affairs John H. Holdridge expressed them in 1982 (a formulation delivered to Taiwan's President Chiang Ching-kuo by then-Director of the American Institute in Taiwan James R. Lilley). A similar resolution passed the Senate on July 6, 2016. Reaffirmation The State Department has reaffirmed the Six Assurances repeatedly. On May 19, 2016, one day before Tsai Ing-wen assumed the Presidency of the Republic of China, U.S. Senators Marco Rubio (R-FL), a member of the Senate Foreign Relations Committee and the Senate Select Committee on Intelligence, and Bob Menendez (D-NJ), former chair of the Senate Foreign Relations Committee and co-chair of the Senate Taiwan Caucus, introduced a concurrent resolution reaffirming the Taiwan Relations Act and the “Six Assurances” as cornerstones of United States–Taiwan relations. The 2016 Republican Party platform affirmed the Six Assurances to Taiwan, supported the Taiwan Relations Act, opposed unilateral changes to the status quo, and endorsed peaceful resolution of cross-strait issues. The Asia Reassurance Initiative Act (Pub. L. 115–409) states that it is the policy of the U.S. to enforce commitments to Taiwan consistent with the Six Assurances. As of September 2018, the Donald Trump administration "has stated that the U.S.-Taiwan relationship is also 'guided' by [the] 'Six Assurances'". In November 2020, U.S. Secretary of State Mike Pompeo stated that “Taiwan has not been a part of China, and that was recognized with the work that the Reagan administration did to lay out the policies that the United States has adhered to now for three and a half decades, and done so under both administrations,” which was seen as invoking clause 5. The National Defense Authorization Act for Fiscal Year 2021 reconfirmed the Taiwan Relations Act (TRA) and the Six Assurances as the foundation for US-Taiwan relations. 
On August 2, 2022, Speaker of the House Nancy Pelosi, in a statement during a visit to Taiwan, made reference to the United States' continuing support of the TRA, the Three Communiqués, and the Six Assurances. The Six Assurances to Taiwan Act, introduced in the US House in May 2025, would, if passed, codify the Six Assurances into law. See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Congolian_rainforests] | [TOKENS: 745] |
Contents Congolian rainforests The Congolian rainforests (French: Forêts tropicales congolaises) are a broad belt of lowland tropical moist broadleaf forests which extend across the basin of the Congo River and its tributaries in Central Africa. Description The Congolian rainforest is the world's second-largest tropical forest, after the Amazon rainforest. It covers over 500,000,000 acres (2,000,000 km2) across six countries and contains a quarter of the world's remaining tropical forest. The Congolian forests cover southeastern Cameroon, Gabon, the Republic of the Congo, the northern and central Democratic Republic of the Congo, and portions of southern and central Africa. The Congolian rainforest is home to a large number of flora and fauna, including more than 10,000 species of plants and over 10,000 species of animals. It is estimated that the region contains more than a quarter of the world's plant species and is home to one of the world's most threatened primate species, the western lowland gorilla. There are also a number of other primate species, including the chimpanzee, black colobus monkey, red colobus monkey, and olive baboon. To the north, south, and southwest, the forests transition to a forest-savanna mosaic, a patchwork of drier forests, savannas, and grasslands. To the west, the Congolian forests transition to the coastal Lower Guinean forests, which extend from southwestern Cameroon into southern Nigeria and Benin; these forest zones share many similarities and are sometimes known as the Lower Guinean-Congolian forests. To the east, the lowland Congolian forests transition to the highland Albertine Rift montane forests, which cover the mountains lining the Albertine Rift, a branch of the East African Rift system. The World Wide Fund for Nature divides the Congolian forests into six distinct ecoregions. Flora and fauna The Congolian rainforests are home to over 10,000 species of plants, of which 30% are endemic. The Congolian rainforests are less biodiverse than the Amazon and Southeast Asian rainforests, but their plant and animal life is still richer and more varied than in most other places on Earth. The Congolian forests are a Global 200 ecoregion. There are over 400 species of mammals in the rainforest, including African forest elephants, African bush elephants, leopards, bongos, red river hogs, chimpanzees, bonobos, mountain gorillas, and lowland gorillas. The okapi is endemic to the northeastern Congolian rainforests. The rainforests have about 1,000 native species of birds, such as the grey parrot, brown nightjar, and bat hawk, and some 700 species of fish, such as the Nile tilapia, Nile perch, and giraffe catfish. Conservation Threats to the rainforests include destruction and fragmentation of forests by commercial logging, oil palm plantations, and mining. The bushmeat trade and poaching are depleting the rainforests of wildlife. With annual forest loss of 0.3% during the 2000s, the region had the lowest deforestation rate of any major tropical forest zone. From 2015 to 2019, the rate of deforestation in the Democratic Republic of the Congo doubled. In 2021, deforestation of the Congolese rainforest increased by 5%. Over the past 20 years, 17.1 million hectares of forest have been cut down. References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/East_Sudanian_savanna] | [TOKENS: 834] |
Contents East Sudanian savanna The East Sudanian savanna is a hot, seasonally dry tropical savanna ecoregion of Central and East Africa. Geography The East Sudanian savanna is the eastern half of the Sudanian savanna belt which runs east and west across Africa. The eastern half lies east of the Cameroon Highlands and west of the Ethiopian Highlands. The Sahel belt of drier acacia savanna lies to the north, and beyond that is the Sahara Desert. More humid forest–savanna mosaic ecoregions lie to the south. The Sudd flooded grasslands in South Sudan divide the ecoregion into eastern and western blocks. The land is mainly flat, although there are some hillier sections around Lake Albert and in western Ethiopia. Climate The climate is a mix of tropical savanna climate and hot semi-arid climate (Köppen climate classification Aw and BSh), with distinct wet and dry seasons and warm to hot temperatures year-round. Flora Typical species are deciduous Terminalia trees with an undergrowth of shrubs such as Combretum and grasses such as tall elephant grass (Pennisetum purpureum). There are more than 1,000 endemic plant species. Fauna Threatened species include the African bush elephant (Loxodonta africana) (in Chad and the CAR), East African wild dog (Lycaon pictus lupinus), Northeast African cheetah (Acinonyx jubatus soemmeringii), African leopard (Panthera pardus pardus), lion (Panthera leo), and giant eland (Taurotragus derbianus). Urban areas and settlements In Cameroon the region is more or less contiguous with the North Region, where Bénoué National Park and Bouba Njida National Park contain some of the endangered species mentioned above. In Chad the East Sudanian savanna covers the south of the country, including the industrial city of Moundou, Chad's second-largest city, the oil town of Doba, and the cotton-growing towns of Sarh and Pala. In the Central African Republic the region covers the sparsely populated north of the country; the larger towns include Bossangoa. In South Sudan, west of the Sudd swamp, the East Sudanian savanna covers the Bahr el Ghazal area, including the town of Wau. East of the Sudd, the ecoregion runs north to south from northern Uganda, through south-eastern South Sudan east of the White Nile (including the area around the southern cities of Juba and Eastern Equatoria around Torit), and up along the Ethiopia–Sudan border. Much of this area has seen combat in recent decades and is in various states of reconstruction. Threats and preservation Seasonal cultivation and herding are lifestyles which lead the population of the savanna to overgraze, to overharvest trees for firewood or charcoal, and to set fires. This has reduced the woodland considerably. However, large areas of unspoilt habitat remain even outside protected areas, especially compared with the more heavily populated West Sudanian savanna. Poaching is another problem; indeed, the black rhinoceros (Diceros bicornis) and northern white rhinoceros (Ceratotherium simum cottoni) were formerly native to the ecoregion but have been eliminated through over-hunting. Protected areas 24.68% of the ecoregion is in protected areas. Protected areas include Bouba Njida National Park in Cameroon, Bamingui-Bangoran National Park and Biosphere Reserve, Andre Felix National Park, and Manovo-Gounda St. Floris National Park in the Central African Republic, Zakouma National Park in Chad, Gambella National Park in Ethiopia, Dinder National Park and Radom National Park in Sudan, Boma National Park and Kidepo Game Reserve in South Sudan, and Kidepo Valley National Park in Uganda. 
Most protected areas are severely under-resourced, and apart from hunting for sport in the Central African Republic there is little wildlife-based tourism. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was also designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. 
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a Chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
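To make the preceding description concrete, here is a minimal Python sketch of a fetch-decode-execute loop in which the program counter is an ordinary value that a conditional jump instruction can overwrite; the instruction names (LOADI, ADD, JNZ, HALT), the three registers, and the countdown program are invented purely for illustration and do not correspond to any real CPU.

def run(program):
    regs = {"r0": 0, "r1": 0, "r2": 0}    # a few general-purpose registers
    pc = 0                                 # the program counter: just another number
    while True:
        op, *args = program[pc]            # fetch and decode the instruction at pc
        pc += 1                            # by default, fall through to the next instruction
        if op == "LOADI":                  # load an immediate value into a register
            regs[args[0]] = args[1]
        elif op == "ADD":                  # regs[dst] = regs[dst] + regs[src]
            regs[args[0]] += regs[args[1]]
        elif op == "JNZ":                  # conditional jump: overwrite the program counter
            if regs[args[0]] != 0:
                pc = args[1]
        elif op == "HALT":
            return regs

prog = [
    ("LOADI", "r0", 3),     # address 0: r0 = 3
    ("LOADI", "r1", -1),    # address 1: r1 = -1
    ("ADD", "r0", "r1"),    # address 2: r0 = r0 + r1
    ("JNZ", "r0", 2),       # address 3: loop back to address 2 while r0 is non-zero
    ("HALT",),              # address 4: stop and return the registers
]
print(run(prog))            # {'r0': 0, 'r1': -1, 'r2': 0}

The loop in this toy program is exactly the "jump" mechanism described above: the JNZ instruction changes the program counter, so the same ADD instruction is executed repeatedly until the condition fails.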
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256), either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. 
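As a rough illustration of the ideas above (numbered byte cells, multi-byte values, and two's complement for negatives), the following Python sketch uses a plain list as "memory"; the cell addresses and values are the arbitrary examples used in the text, not a description of any particular machine.

memory = [0] * 4096                                   # 4096 cells, each meant to hold one byte (0-255)

memory[1357] = 123                                    # "put the number 123 into the cell numbered 1357"
memory[2468] = 77
memory[1595] = (memory[1357] + memory[2468]) % 256    # add two cells, keep the result within one byte

assert 2 ** 8 == 256                                  # one byte distinguishes 256 values

# Two's complement: the same 8-bit pattern reads as 251 unsigned or -5 signed.
byte = (-5).to_bytes(1, "big", signed=True)           # b'\xfb'
print(int.from_bytes(byte, "big", signed=False))      # 251
print(int.from_bytes(byte, "big", signed=True))       # -5

# Larger numbers span several consecutive bytes (here, four).
print(list((1_000_000).to_bytes(4, "big")))           # [0, 15, 66, 64]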
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks. 
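The time-slicing idea described above can be sketched in a few lines of Python; this cooperative round-robin loop, with generators standing in for programs and each loop iteration standing in for a timer interrupt, is only a simplified illustration of the scheduling concept, not how any real operating system works.

from collections import deque

def program(name, steps):
    for i in range(steps):
        yield f"{name} runs step {i}"     # yielding stands in for being interrupted

ready = deque([program("A", 3), program("B", 2)])     # two "programs" waiting for time slices
while ready:
    current = ready.popleft()             # pick the next program in line
    try:
        print(next(current))              # let it run for one time slice
        ready.append(current)             # then send it to the back of the queue
    except StopIteration:
        pass                              # the program has finished; drop it

Run in order, the output interleaves A and B one step at a time, which is the appearance of simultaneous execution the text describes, even though only one "program" is ever running at any instant.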
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
The following example is written in the MIPS assembly language: Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). 
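As a hedged illustration of the point that mnemonics map to numeric opcodes and that a program is ultimately just a list of numbers held in memory, here is a small Python sketch of an "assembler" for an invented three-instruction machine; the mnemonics, opcode numbers, and operands are made up for the example and are unrelated to MIPS or any real architecture.

OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3}    # invented mnemonic-to-opcode table

def assemble(source):
    # Turn lines like "LOAD 10" into a flat list of numbers: opcode, operand, opcode, operand, ...
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, operand = line.split()
        machine_code += [OPCODES[mnemonic], int(operand)]
    return machine_code

source = """
LOAD 10
ADD 20
STORE 30
"""
print(assemble(source))    # [1, 10, 2, 20, 3, 30] -- the program itself is just numbers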
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. 
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. See also Notes References Sources External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Great_Rift_Valley] | [TOKENS: 1114] |
Contents Great Rift Valley The Great Rift Valley (Swahili: Bonde la ufa) is a series of contiguous geographic depressions, approximately 6,000 or 7,000 kilometres (4,300 mi) in total length, the definition varying between sources, that runs from the southern Turkish Hatay Province in Asia, through the Red Sea, to Mozambique in Southeast Africa. While the name remains in some usages, it is rarely used in geology, where the term "Afro-Arabian Rift System" is preferred. This valley extends southward from Western Asia into the eastern part of Africa, where several deep, elongated lakes, called ribbon lakes, exist on the rift valley floor, Lake Malawi and Lake Tanganyika being two such examples. The region has a unique ecosystem and contains a number of Africa's wildlife parks. The term Great Rift Valley is most often used to refer to the valley of the East African Rift, the divergent plate boundary which extends from the Afar triple junction southward through eastern Africa, and is in the process of splitting the African plate into two new and separate plates. Geologists generally refer to these evolving plates as the Nubian plate and the Somali plate. Theoretical extent Today these rifts and faults are seen as distinct, although connected. Originally, the Great Rift Valley was thought to be a single feature that extended from Lebanon in the north to Mozambique in the south, where it constitutes one of two distinct physiographic provinces of the East African mountains. It included what today is called the Lebanese section of the Dead Sea Transform (Turkey to Straits of Tiran), the Jordan Rift Valley (geographic term for section including entire course of the Jordan River, the Dead Sea, and the Arabah Valley), Red Sea Rift, and the East African Rift. These rifts and faults are considered to have been formed 35 million years ago. Asia The northernmost part of the Rift corresponds to the central section of what is today called the Dead Sea Transform (DST) or Rift. This midsection of the DST forms the Beqaa Valley in Lebanon, separating the Mount Lebanon range from the Anti-Lebanon Mountains. Further south it is known as the Hula Valley separating the Galilee mountains and the Golan Heights. The Jordan River begins here and flows southward through Lake Hula into the Sea of Galilee in Israel. The Rift then continues south through the Jordan Rift Valley into the Dead Sea, on the Israeli-Jordanian border. From the Dead Sea southwards, the Rift is occupied by the Wadi Arabah, then the Gulf of Aqaba, and then the Red Sea. Off the southern tip of Sinai in the Red Sea, the Dead Sea Transform meets the Red Sea Rift which runs the length of the Red Sea. The Red Sea Rift comes ashore to meet the East African Rift and the Aden Ridge, in the Afar Depression of East Africa. The junction of these three rifts is called the Afar triple junction. Africa The East African Rift follows the Red Sea to the end before turning inland into the Ethiopian highlands, dividing the country into two large and adjacent but separate mountainous regions. In Kenya, Uganda, and the fringes of South Sudan, the Great Rift runs along two separate branches that are joined to each other only at their southern end, in Southern Tanzania along its border with Zambia. The two branches are called the Western Rift Valley and the Eastern Rift Valley. 
The Western Rift, also called the Albertine Rift, is bordered by some of the highest mountains in Africa, including the Virunga Mountains, Mitumba Mountains, and Ruwenzori Range. It contains the Rift Valley lakes, which include some of the deepest lakes in the world (up to 1,470 metres (4,820 ft) deep at Lake Tanganyika). Much of this area lies within the boundaries of national parks, such as Virunga National Park in the Democratic Republic of Congo, Rwenzori National Park and Queen Elizabeth National Park in Uganda, and Volcanoes National Park in Rwanda. Lake Victoria is considered to be part of the rift valley system although it actually lies between the two branches. All of the African Great Lakes were formed as the result of the rift, and most lie in territories within the rift. In Kenya, the valley is deepest to the north of Nairobi. As the lakes in the Eastern Rift have no output to the sea and tend to be shallow, they have a high mineral content as the evaporation of water leaves the salts behind. For example, Lake Magadi has high concentrations of soda (sodium carbonate) and Lake Elmenteita, Lake Bogoria, and Lake Nakuru are all strongly alkaline, while the freshwater springs supplying Lake Naivasha are essential to support its current biological variety. The southern section of the Rift Valley includes Lake Malawi, the third-deepest freshwater body in the world, which reaches 706 metres (2,316 ft) in depth and separates the Nyassa plateau of Northern Mozambique from Malawi. The rift extends southwards from Lake Malawi as the valley of the Shire River, which flows from the lake into the Zambezi River. The rift continues south of the Zambezi as the Urema Valley of central Mozambique. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/List_of_open-source_health_software] | [TOKENS: 79] |
Contents List of open-source health software The following is a list of notable software packages and applications licensed under an open-source license or in the public domain for use in the health care industry. Public health and biosurveillance Electronic records and medical practice management Health system management Disease management Imaging/visualization Medical information systems Research Mobile devices Source: Out-of-the-box distributions Interoperability Specifications See also References Further reading |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/GamePro] | [TOKENS: 2748] |
Contents GamePro GamePro was an American multiplatform video game magazine media company that published online and print content covering the video game industry, video game hardware and video game software. The magazine featured content on various video game consoles, personal computers and mobile devices. GamePro Media properties included GamePro magazine and its website. The company was also a subsidiary of the privately held International Data Group (IDG), a media, events and research technology group. The magazine and the parent publication printing it went defunct in 2011, but were outlasted by GamePro.com. Originally published in 1989, GamePro magazine provided feature articles, news, previews and reviews on various video games, video game hardware and the entertainment video game industry. The magazine was published monthly (most recently from its headquarters in Oakland, California), with October 2011 being its last issue, after over 22 years of publication. GamePro's February 2010 issue introduced a redesigned layout and a new editorial direction focused on the people and culture of gaming. Despite the shutdown of U.S. operations, the magazine continues to operate internationally in France, Germany, and Spain. GamePro.com was officially launched in 1998. Updated daily, the website's content included feature articles, news, previews, reviews, screenshots and videos covering video games, video game hardware and the entertainment gaming industry. The website also included user content such as forums, reviews and blogs. In January 2010, the website was redesigned to reflect the same new editorial changes being made in the print magazine. The website was based at GamePro's headquarters in San Francisco from 1998 to 2002 and then in Oakland, California from 2002 to 2011. History and establishment GamePro was first established in late 1988 by Patrick Ferrell, his sister-in-law Leeanne McDermott, and the husband-wife design team of Michael and Lynne Kavish. They worked out of their houses throughout the San Francisco Bay Area before leasing their first office in Redwood City, California at the end of 1989. Lacking the cash flow to sustain growth after publishing the first issue, the founding management team sought a major publisher and in 1989 found one with IDG Peterborough, a New Hampshire-based division of the global giant IDG. A merger and acquisition team comprising IDG Peterborough President Roger Murphy and two other executives, Jim McBrian and Roger Strukhoff, acquired the magazine, which a few months later was spun off as an independent business unit of IDG under the leadership of Ferrell as president/CEO. The later addition of John Rousseau as publisher and Wes Nihei as editor-in-chief, as well as renowned artist Francis Mao, established GamePro as a large, profitable worldwide publication. Francis Mao, acting in his role as art director for the nascent GamePro, contracted game illustrator Marc Ericksen to create the premiere cover for the first edition of the magazine. Ericksen would go on to produce five of the first ten covers for GamePro, eventually creating eight in total, and would continue a secondary role creating a number of the double-page spreads for the very popular monthly Pro Tips section. The magazine had a monthly circulation of 300,000. Over the years, the GamePro offices moved from Redwood City (1989–1991) to San Mateo (1991–1998) to San Francisco (1998–2002) and lastly Oakland. 
In 1993, the company was renamed from GamePro Inc. to Infotainment World to reflect its growing and diverse publication lines. The magazine was known for its editors using comic book-like avatars and monikers when reviewing games. In January 2004, however, GamePro ceased to use the avatars due to a change in the overall design and layout of the magazine. Meanwhile, the editorial voices carried over to the community on its online sister publication, www.gamepro.com. There was a TV show called GamePro TV. The show was hosted by J. D. Roth and Brennan Howard. The show was nationally syndicated for one year, then moved to cable (USA and Sci-Fi) for a second year. In 1993, Patrick Ferrell sent Debra Vernon, VP of marketing, to a meeting between the games industry and the Consumer Electronics Show (CES). Realizing an opportunity, the team at the newly renamed Infotainment World launched E3, the Electronic Entertainment Expo. The industry backed E3 and Ferrell partnered with the IDSA to produce the event. It was one of the biggest trade show launches in history. Early in its lifespan, the magazine also included comic book pages about the adventures of a superhero named GamePro, a video game player from the real world brought into a dimension where video games were real to save it from creatures called the Evil Darklings. In 2003, Joyride Studios produced limited-edition action figures of some of the GamePro editorial characters. GamePro also appeared in several international editions, including France, Germany, Spain, Portugal, Italy, Turkey, Australia, Brazil and Greece. Some of these publications shared the North American content, while others shared only the name and logo and featured different content. Early in 2006, IDG Entertainment began to change internally and shift operational focus from a "print to online" to an "online to print" publishing mentality. The first steps were to build a large online network of websites and to rebuild the editorial team, under industry veteran George Jones. In February 2006, GamePro's online video channel, Games.net, launched a series of video game-related shows. The extensive online programming was geared towards an older and more mature audience. In August 2006, the GamePro online team spun off a new cheats site, GamerHelp.com. It was shortly followed by a video game information aggregation site, Games.net, and a dedicated gaming downloads site, GameDownloads.com. Under the new leadership of George Jones, GamePro magazine underwent a massive overhaul in the March 2007 issue. While losing some of the more dated elements of the magazine, the new arrangement focused on five main elements: HD game images, more reviews and previews per issue, a www.gamepro.com community showcase, user contributions and insider news. The German GamePro website is still run, this time in partnership with GameStar; the site carries a notice at the top of the screen reading "Partner of GameStar" (written in German). In 2009, GamePro's 20th anniversary coincided with 20-year industry veteran John Davison joining the newly named GamePro Media team in October 2009 as executive vice president of content. Under Davison's direction, the magazine and website were redesigned in early 2010 with an editorial shift toward focusing on the people and culture of gaming. The redesigned magazine and website were met with an enthusiastic audience response. 
In addition to announcing the hire of Davison in October 2009, the company also announced an "aggressive growth plan throughout 2009 and beyond, with numerous online media initiatives to deepen consumer engagement and create new opportunities for advertisers." Plans included partnering with sister company IDG TechNetwork to build a "boutique online network of sites." The result was the introduction of the GamePro Media Network. In September 2010, GamePro Media announced a new alliance with online magazine The Escapist, offering marketers joint advertising programs for reaching an unduplicated male audience. The partnership was named the GamePro Escapist Media Group. In November 2010, Julian Rignall joined GamePro Media as its new vice-president of content, replacing John Davison, who resigned in September 2010. GamePro ended monthly publication after over 22 years with its October 2011 issue. Shortly after that issue, the magazine changed to GamePro Quarterly, a quarterly publication using higher-quality paper stock and larger and thicker than the previous standard magazine issues. GamePro Quarterly hit newsstands within the first half of November 2011. The quarterly endeavor lasted for only one issue before being scrapped. On November 30, it was announced that GamePro as a magazine and a website would be shutting down on December 5, 2011. GamePro then became part of the PC World website as a small section of the site covering the latest video games, run by the PC World staff. Content In February 2010, the magazine's main sections were: At first, games were rated in five categories: Graphics, Sound, Gameplay, FunFactor, and Challenge. Later the "Challenge" category was dropped and the "Gameplay" category was renamed "Control". The ratings were initially on a scale of 1.0 to 5.0 in increments of 0.5; a 0.5 score, below the original minimum, was later added. The first game to receive such a score was Battle Arena Toshinden URA for the Sega Saturn. Starting in October 1990, each score was accentuated with a cartoon face (the GamePro Dude) depicting different expressions for different ratings. The ratings faces remained in use until about 2002. GamePro's reviews became esteemed enough that some games would display their GamePro ratings on their retail boxes. After 2002, the category system was eliminated in favor of a single overall rating for each game on a scale of 1.0 to 5.0 stars. A graphic of five stars was shown alongside the written review. The number of stars a game earned was indicated by the number of solid stars (e.g., a game's 4-star rating was represented by showing 4 solid stars and one hollow star). No game ever received less than one star. An Editors' Choice Award was given to a game that earned either 4.5 or 5.0 stars. GamePro had a "Role-Player's Realm" section dedicated to the coverage and reviews of role-playing video games. In the January 1997 issue, they published a list of "The Top Ten Best RPGs Ever", which consisted of the following games: Later, in 2008, GamePro published another list of "The 26 Best RPGs of All Time", the top ten of which consisted of the following games: GamePro is credited with originating the concept of the "Protip", a short piece of advice, as if spoken by an expert, usually attached to an image. Former writer Dan Amrich explained that, as part of the editorial process, writers were encouraged to caption the three to seven images used in an article with such advice. 
One purported image from a GamePro review of Doom (1993) captioned a picture of one of the game's bosses with "PROTIP: To defeat the Cyberdemon, shoot at it until it dies". The apparent advice, which is common sense and self-evident for players of first-person shooters like Doom, was widely mocked and created a meme of similarly obvious ProTips added as captions to pictures. However, the image was revealed to be a fake, created as an April Fools' joke for the fansite Doomworld.com. Every April until 2007, as an April Fools' Day prank, GamePro printed a 2-5 page satirical spoof of the magazine called LamePro, a parody of GamePro's own official title. The feature contained humorous game titles and fake news similar to The Onion, though some content, such as ways to trigger useless game glitches (games getting stuck, resetting, or otherwise), was real. The section parodied GamePro itself, as well as other game magazines. PC Games What was called a "sister publication" to GamePro, PC Games, was published by IDG until 1999. It was founded in August 1988, but changed its name to Electronic Entertainment in late 1993 and PC Entertainment in early 1996. The title reverted to PC Games in June 1996. Its PC Games Online website was merged with several other IDG properties, including GamePro Online, to form the IDG Games Network in late 1997. The print version of PC Games was the fourth-largest computer game magazine in the United States during 1998, with a circulation of 169,281. In March 1999, it was purchased and closed by Imagine Publishing; its April 1999 issue was its last. Following this event, Imagine sent former subscribers of PC Games issues of PC Gamer US and PC Accelerator in its place. According to GameDaily, the move came as part of IDG's rebranding effort to lean more heavily on the GamePro name: coverage of computer games was thereafter centralized at PCGamePro.com and in the "PC GamePro" section of GamePro's print edition. Australian GamePro Australian GamePro was a bi-monthly video games magazine published by IDG from 10 November 2003 to February 2007. The founding editor was Stuart Clarke, who was succeeded in January 2006 by Chris Stead. According to the latter, the magazine had doubled its sales from 2006 to 2007, but the decision to discontinue the publication came as a result of internal restructuring. The Australian GamePro team put together a number of special issues, including: References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Meta_Platforms#cite_ref-59] | [TOKENS: 8626] |
Contents Meta Platforms Meta Platforms, Inc. (doing business as Meta) is an American multinational technology company headquartered in Menlo Park, California. Meta owns and operates several prominent social media platforms and communication services, including Facebook, Instagram, WhatsApp, Messenger, Threads and Manus. The company also operates an advertising network for its own sites and third parties; as of 2023, advertising accounted for 97.8 percent of its total revenue. Meta has been described as a part of Big Tech, which refers to the six largest tech companies in the United States: Alphabet (Google), Amazon, Apple, Meta (Facebook), Microsoft, and Nvidia, which are also among the largest companies in the world by market capitalization. The company was originally established in 2004 as TheFacebook, Inc., and was renamed Facebook, Inc. in 2005. In 2021, it rebranded as Meta Platforms, Inc. to reflect a strategic shift toward developing the metaverse, an interconnected digital ecosystem spanning virtual and augmented reality technologies. In 2023, Meta was ranked 31st on the Forbes Global 2000 list of the world's largest public companies. As of 2022, it was the world's third-largest spender on research and development, with R&D expenses totaling US$35.3 billion. History Facebook filed for an initial public offering (IPO) on February 1, 2012. The preliminary prospectus stated that the company sought to raise $5 billion, had 845 million monthly active users, and a website accruing 2.7 billion likes and comments daily. After the IPO, Zuckerberg would retain 22% of the total shares and 57% of the total voting power in Facebook. Underwriters priced the shares at $38 each, valuing the company at $104 billion, the largest valuation yet for a newly public company. On May 16, two days before the IPO, Facebook announced it would sell 25% more shares than originally planned due to high demand. The IPO raised $16 billion, making it the third-largest in US history (slightly ahead of AT&T Wireless and behind only General Motors and Visa). The stock price left the company with a higher market capitalization than all but a few U.S. corporations, surpassing heavyweights such as Amazon, McDonald's, Disney, and Kraft Foods, and made Zuckerberg's stock worth $19 billion. The New York Times stated that the offering overcame questions about Facebook's difficulties in attracting advertisers to transform the company into a "must-own stock". Jimmy Lee of JPMorgan Chase described it as "the next great blue-chip". Writers at TechCrunch, on the other hand, expressed skepticism, stating, "That's a big multiple to live up to, and Facebook will likely need to add bold new revenue streams to justify the mammoth valuation." Trading in the stock, which began on May 18, was delayed that day due to technical problems with the Nasdaq exchange. The stock struggled to stay above the IPO price for most of the day, forcing underwriters to buy back shares to support the price. At the closing bell, shares were valued at $38.23, only $0.23 above the IPO price and down $3.82 from the opening bell value. The opening was widely described by the financial press as a disappointment. The stock set a new record for trading volume of an IPO. On May 25, 2012, the stock ended its first full week of trading at $31.91, a 16.5% decline. 
On May 22, 2012, regulators from Wall Street's Financial Industry Regulatory Authority announced that they had begun to investigate whether banks underwriting Facebook had improperly shared information only with select clients rather than the general public. Massachusetts Secretary of State William F. Galvin subpoenaed Morgan Stanley over the same issue. The allegations sparked "fury" among some investors and led to the immediate filing of several lawsuits, one of them a class action suit claiming more than $2.5 billion in losses due to the IPO. Bloomberg estimated that retail investors may have lost approximately $630 million on Facebook stock since its debut. Standard & Poor's added Facebook to its S&P 500 index on December 21, 2013. On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from "Move fast and break things" to "Move fast with stable infrastructure". The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough." In November 2016, Facebook announced the Microsoft Windows client of its gaming service Facebook Gameroom, formerly Facebook Games Arcade, at the Unity Technologies developers conference. The client allows Facebook users to play "native" games in addition to its web games. The service was closed in June 2021. Lasso was a short-video sharing app from Facebook, similar to TikTok, that was launched on iOS and Android in 2018 and was aimed at teenagers. On July 2, 2020, Facebook announced that Lasso would be shutting down on July 10. In 2018, the Oculus lead Jason Rubin sent his 50-page vision document titled "The Metaverse" to Facebook's leadership. In the document, Rubin acknowledged that Facebook's virtual reality business had not caught on as expected, despite the hundreds of millions of dollars spent on content for early adopters. He also urged the company to execute fast and invest heavily in the vision, to shut out HTC, Apple, Google and other competitors in the VR space. Regarding other players' participation in the metaverse vision, he called for the company to build the "metaverse" to prevent their competitors from "being in the VR business in a meaningful way at all". In May 2019, Facebook founded Libra Networks, reportedly to develop its own stablecoin cryptocurrency. Later, it was reported that Libra was being supported by financial companies such as Visa, Mastercard, PayPal and Uber. The consortium of companies was expected to contribute $10 million each to fund the launch of the cryptocurrency coin named Libra. Depending on when it would receive approval from the Swiss Financial Market Supervisory Authority to operate as a payments service, the Libra Association had planned to launch a limited-format cryptocurrency in 2021. Libra was renamed Diem, before being shut down and sold in January 2022 after backlash from Swiss government regulators and the public. During the COVID-19 pandemic, the use of online services, including Facebook, grew globally. Zuckerberg predicted this would be a "permanent acceleration" that would continue after the pandemic. Facebook hired aggressively, growing from 48,268 employees in March 2020 to more than 87,000 by September 2022. Following a period of intense scrutiny and damaging whistleblower leaks, news started to emerge on October 21, 2021 about Facebook's plan to rebrand the company and change its name. 
In the Q3 2021 earnings call on October 25, Mark Zuckerberg discussed the ongoing criticism of the company's social services and the way it operates, and pointed to the company's pivot toward building the metaverse, without mentioning the rebranding or the name change. The metaverse vision and the name change from Facebook, Inc. to Meta Platforms were introduced at Facebook Connect on October 28, 2021. According to Facebook's PR campaign, the name change reflects the company's shifting long-term focus toward building the metaverse, a digital extension of the physical world through social media, virtual reality and augmented reality features. "Meta" had been registered as a trademark in the United States in 2018 (after an initial filing in 2015) for marketing, advertising, and computer services, by a Canadian company that provided big data analysis of scientific literature. This company was acquired in 2017 by the Chan Zuckerberg Initiative (CZI), a foundation established by Zuckerberg and his wife, Priscilla Chan, and became one of their projects. Following the rebranding announcement, CZI announced that it had already decided to deprioritize the earlier Meta project and would therefore transfer its rights to the name to Meta Platforms; the previous project would end in 2022. Soon after the rebranding, in early February 2022, Meta reported a greater-than-expected decline in profits in the fourth quarter of 2021. It reported no growth in monthly users, and indicated it expected revenue growth to stall. It also expected measures taken by Apple Inc. to protect user privacy to cost it some $10 billion in advertisement revenue, an amount equal to roughly 8% of its revenue for 2021. In a meeting with Meta staff the day after earnings were reported, Zuckerberg blamed competition for user attention, particularly from video-based apps such as TikTok. The 27% drop in the company's share price in reaction to the news eliminated some $230 billion of value from Meta's market capitalization. Bloomberg described the decline as "an epic rout that, in its sheer scale, is unlike anything Wall Street or Silicon Valley has ever seen". Zuckerberg's net worth fell by as much as $31 billion. Zuckerberg owns 13% of Meta, and the holding makes up the bulk of his wealth. According to published reports by Bloomberg on March 30, 2022, Meta turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, a Meta representative said, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse." In June 2022, Sheryl Sandberg, the chief operating officer of 14 years, announced she would step down later that year. Zuckerberg said that Javier Olivan would replace Sandberg, though in a "more traditional" role. In March 2022, Facebook and Instagram (but not Meta-owned WhatsApp) were banned in Russia, and Meta was added to the Russian list of terrorist and extremist organizations for alleged Russophobia and hate speech (including alleged calls for genocide) amid the ongoing Russian invasion of Ukraine. Meta appealed against the ban, but it was upheld by a Moscow court in June of the same year. Also in March 2022, Meta and Italian eyewear giant Luxottica released Ray-Ban Stories, a series of smartglasses which could play music and take pictures. 
Meta and Luxottica parent company EssilorLuxottica declined to disclose sales figures for the product line as of September 2022, though Meta has expressed satisfaction with its customer feedback. In July 2022, Meta saw its first year-on-year revenue decline, with total revenue slipping by 1% to $28.8bn. Analysts and journalists attributed the loss to its advertising business, which has been limited by Apple's App Tracking Transparency feature and the number of people who have opted not to be tracked by Meta apps. Zuckerberg also attributed the decline to increasing competition from TikTok. On October 27, 2022, Meta's market value dropped to $268 billion, a loss of around $700 billion compared to 2021, and its shares fell by 24%. It lost its spot among the top 20 US companies by market cap, despite reaching the top 5 in the previous year. In November 2022, Meta laid off 11,000 employees, 13% of its workforce. Zuckerberg said the decision to aggressively increase Meta's investments had been a mistake, as he had wrongly predicted that the surge in e-commerce would last beyond the COVID-19 pandemic. He also attributed the decline to increased competition, a global economic downturn and "ads signal loss". Plans to lay off a further 10,000 employees began in April 2023. The layoffs were part of a general downturn in the technology industry, alongside layoffs by companies including Google, Amazon, Tesla, Snap, Twitter and Lyft. Starting in 2022, Meta scrambled to catch up to other tech companies in adopting specialized artificial intelligence hardware and software. It had been using less expensive CPUs instead of GPUs for AI work, but that approach turned out to be less efficient. The company gifted the Inter-university Consortium for Political and Social Research $1.3 million to finance the Social Media Archive's aim of making its data available for social science research. In 2023, Ireland's Data Protection Commissioner imposed a record EUR 1.2 billion fine on Meta for transferring data from Europe to the United States without adequate protections for EU citizens.: 250 In March 2023, Meta announced a new round of layoffs that would cut 10,000 employees and close 5,000 open positions to make the company more efficient. Meta's revenue surpassed analyst expectations for the first quarter of 2023 after the company announced that it was increasing its focus on AI. On July 6, Meta launched a new app, Threads, a competitor to Twitter. Meta announced its artificial intelligence model Llama 2 in July 2023, available for commercial use via partnerships with major cloud providers like Microsoft. It was the first project to be unveiled out of Meta's generative AI group after it was set up in February. Meta would not charge for access or usage but instead operate with an open-source model, allowing it to ascertain what improvements need to be made. Prior to this announcement, Meta said it had no plans to release Llama 2 for commercial use. An earlier version of Llama was released to academics. In August 2023, Meta announced its permanent removal of news content from Facebook and Instagram in Canada due to the Online News Act, which requires Canadian news outlets to be compensated for content shared on its platforms. The Online News Act was in effect by year-end, but Meta declined to participate in the regulatory process. In October 2023, Zuckerberg said that AI would be Meta's biggest investment area in 2024. Meta finished 2023 as one of the best-performing technology stocks of the year, with its share price up 150 percent. 
Its stock reached an all-time high in January 2024, bringing Meta within 2% of a $1 trillion market capitalization. In November 2023, Meta Platforms launched an ad-free subscription service in Europe, allowing subscribers to opt out of having their personal data collected for targeted advertising. A group of 28 European organizations, including Max Schrems' advocacy group NOYB, the Irish Council for Civil Liberties, Wikimedia Europe, and the Electronic Privacy Information Center, signed a 2024 letter to the European Data Protection Board (EDPB) expressing concern that this subscription model would undermine privacy protections, specifically GDPR data protection standards. Meta removed the Facebook and Instagram accounts of Iran's Supreme Leader Ali Khamenei in February 2024, citing repeated violations of its Dangerous Organizations & Individuals policy. As of March 2024, Meta was under investigation by the FDA for alleged use of its social media platforms to sell illegal drugs. On 16 May 2024, the European Commission began an investigation into Meta over concerns related to child safety. In May 2023, Iraqi social media influencer Esaa Ahmed-Adnan encountered a troubling issue when Instagram removed his posts, citing false copyright violations despite his content being original and free from copyrighted material. He discovered that extortionists were behind these takedowns, offering to restore his content for $3,000 or provide ongoing protection for $1,000 per month. This scam, which exploited Meta's rights management tools, became widespread in the Middle East, revealing a gap in Meta's enforcement in developing regions. Aws al-Saadi, founder of the Iraqi nonprofit Tech4Peace, helped Ahmed-Adnan and others, but the restoration process was slow, leading to significant financial losses for many victims, including prominent figures like Ammar al-Hakim. This situation highlighted Meta's challenges in balancing global growth with effective content moderation and protection. On 16 September 2024, Meta announced it had banned Russian state media outlets from its platforms worldwide due to concerns about "foreign interference activity." This decision followed allegations that RT and its employees funneled $10 million through shell companies to secretly fund influence campaigns on various social media channels. Meta's actions were part of a broader effort to counter Russian covert influence operations, which had intensified since the invasion of Ukraine. At its 2024 Connect conference, Meta presented Orion, its first pair of augmented reality glasses. Though Orion was originally intended to be sold to consumers, the manufacturing process turned out to be too complex and expensive. Instead, the company pivoted to producing a small number of the glasses to be used internally. On 4 October 2024, Meta announced its new AI model called Movie Gen, capable of generating realistic video and audio clips based on user prompts. Meta stated it would not release Movie Gen for open development, preferring to collaborate directly with content creators and integrate it into its products by the following year. The model was built using a combination of licensed and publicly available datasets. On October 31, 2024, ProPublica published an investigation into deceptive political advertisement scams that sometimes use hundreds of hijacked profiles and Facebook pages run by organized networks of scammers. The authors cited spotty enforcement by Meta as a major reason for the extent of the issue. 
In November 2024, TechCrunch reported that Meta was considering building a $10bn global underwater cable spanning 25,000 miles. In the same month, Meta closed down 2 million accounts on Facebook and Instagram that were linked to scam centers in Myanmar, Laos, Cambodia, the Philippines, and the United Arab Emirates running pig butchering scams. In December 2024, Meta announced that, beginning February 2025, it would require advertisers running financial services ads in Australia to verify information about the beneficiary and the payer of each ad, in a bid to curb scams. On December 4, 2024, Meta announced it would invest US$10 billion in its largest AI data center, in northeast Louisiana, powered by natural gas facilities. On the 11th of that month, Meta experienced a global outage affecting accounts on all of its social media and messaging applications. Outage reports on DownDetector reached 70,000+ and 100,000+ within minutes for Instagram and Facebook, respectively. In January 2025, Meta announced plans to roll back its diversity, equity, and inclusion (DEI) initiatives, citing shifts in the "legal and policy landscape" in the United States following the 2024 presidential election. The decision followed reports that CEO Mark Zuckerberg sought to align the company more closely with the incoming Trump administration, including changes to content moderation policies and executive leadership. The new content moderation policies continued to bar insults about a person's intellect or mental illness, but made an exception allowing users to call LGBTQ people mentally ill on the basis of being gay or transgender. Later that month, Meta agreed to pay $25 million to settle a 2021 lawsuit brought by Donald Trump over the suspension of his social media accounts after the January 6 riots. Changes to Meta's moderation policies were controversial among its oversight board, with a significant divide in opinion between the board's US conservatives and its global members. In June 2025, Meta Platforms Inc. decided to make a multibillion-dollar investment in the artificial intelligence startup Scale AI. The financing could exceed $10 billion in value, which would make it one of the largest private company funding events of all time. In October 2025, it was announced that Meta would be laying off 600 employees in its artificial intelligence unit in an effort to make the unit faster and simpler. The company referred to its AI unit as "bloated" and said it was seeking to trim down the department. The layoffs were expected to affect Meta's AI infrastructure units, its Fundamental Artificial Intelligence Research (FAIR) unit and other product-related positions. Mergers and acquisitions Meta has acquired multiple companies (often identified as talent acquisitions). One of its first major acquisitions was in April 2012, when it acquired Instagram for approximately US$1 billion in cash and stock. In October 2013, Facebook, Inc. acquired Onavo, an Israeli mobile web analytics company. In February 2014, Facebook, Inc. announced it would buy mobile messaging company WhatsApp for US$19 billion in cash and stock. The acquisition was completed on October 6. Later that year, Facebook bought Oculus VR for $2.3 billion in cash and stock; Oculus released its first consumer virtual reality headset in 2016. In late November 2019, Facebook, Inc. announced the acquisition of the game developer Beat Games, responsible for developing one of that year's most popular VR games, Beat Saber. 
In late 2022, after Facebook, Inc. rebranded to Meta Platforms, Inc., Oculus was rebranded to Meta Quest. In May 2020, Facebook, Inc. announced it had acquired Giphy for a reported cash price of $400 million, to be integrated with the Instagram team. However, in August 2021, the UK's Competition and Markets Authority (CMA) stated that Facebook, Inc. might have to sell Giphy, after an investigation found that the deal between the two companies would harm competition in the display advertising market. Facebook, Inc. was fined $70 million by the CMA for deliberately failing to report all information regarding the acquisition and the ongoing antitrust investigation. In October 2022, the CMA ruled for a second time that Meta be required to divest Giphy, stating that Meta already controls half of the display advertising in the UK. Meta agreed to the sale, though it stated that it disagreed with the decision itself. In May 2023, Giphy was divested to Shutterstock for $53 million. In November 2020, Facebook, Inc. announced that it planned to purchase the customer-service platform and chatbot specialist startup Kustomer to encourage companies to use its platform for business. Kustomer was reported to be valued at slightly over $1 billion. The deal was closed in February 2022 after regulatory approval. In September 2022, Meta acquired Lofelt, a Berlin-based haptic tech startup. In December 2025, it was announced that Meta had acquired the AI-wearables startup Limitless. In the same month, it also acquired another AI startup, Manus AI, for $2 billion. Manus announced in December that its platform had achieved $100 million in recurring revenue just 8 months after its launch, and Meta said it would scale the platform to many other businesses. In January 2026, it was announced that Meta's proposed acquisition of Manus was undergoing preliminary scrutiny by Chinese regulators. The examination concerns the cross-border transfer of artificial intelligence technology developed in China. Lobbying In 2020, Facebook, Inc. spent $19.7 million on lobbying, hiring 79 lobbyists. In 2019, it had spent $16.7 million on lobbying and had a team of 71 lobbyists, up from $12.6 million and 51 lobbyists in 2018. Facebook was the largest spender of lobbying money among the Big Tech companies in 2020. The lobbying team includes top congressional aide John Branscome, who was hired in September 2021 to help the company fend off threats from Democratic lawmakers and the Biden administration. In December 2024, Meta donated $1 million to the inauguration fund for then-President-elect Donald Trump. In 2025, Meta was listed among the donors funding the construction of the White House State Ballroom. Partnerships In February 2026, Meta announced a long-term partnership with Nvidia. Censorship In August 2024, Mark Zuckerberg sent a letter to Jim Jordan indicating that during the COVID-19 pandemic the Biden administration had repeatedly asked Meta to limit certain COVID-19 content, including humor and satire, on Facebook and Instagram. In 2016, Meta hired Jordana Cutler, formerly an employee at the Israeli Embassy to the United States, as its policy chief for Israel and the Jewish Diaspora. In this role, Cutler pushed for the censorship of accounts belonging to Students for Justice in Palestine chapters in the United States. Critics have said that Cutler's position gives the Israeli government an undue influence over Meta policy, and that few countries have such high levels of contact with Meta policymakers. 
Following the election of Donald Trump in 2024, various sources noted possible censorship related to the Democratic Party on Instagram and other Meta platforms. In February 2025, a Meta representative flagged journalist Gil Duran's article and other "critiques of tech industry figures" as spam or sensitive content, limiting their reach. In March 2025, Meta attempted to block former employee Sarah Wynn-Williams from promoting or further distributing her memoir, Careless People, which includes allegations of unaddressed sexual harassment in the workplace by senior executives. The New York Times reported that the arbitration is among Meta's most forceful attempts to repudiate a former employee's account of workplace dynamics. Publisher Macmillan reacted to the ruling by the Emergency International Arbitral Tribunal by stating that it would ignore its provisions. As of 15 March 2025, hardback and digital versions of Careless People were being offered for sale by major online retailers. From October 2025, Meta began removing and restricting access to accounts and pages related to LGBTQ issues, reproductive health and abortion information on its platforms. Martha Dimitratou, executive director of Repro Uncensored, called Meta's shadow-banning of these issues "One of the biggest waves of censorship we are seeing". Disinformation concerns Since its inception, Meta has been accused of being a host for fake news and misinformation. In the wake of the 2016 United States presidential election, Zuckerberg began to take steps to reduce the prevalence of fake news, as the platform had been criticized for its potential influence on the outcome of the election. The company initially partnered with ABC News, the Associated Press, FactCheck.org, Snopes and PolitiFact for its fact-checking initiative; as of 2018, it had over 40 fact-checking partners across the world, including The Weekly Standard. A May 2017 review by The Guardian found that the platform's fact-checking initiatives of partnering with third-party fact-checkers and publicly flagging fake news were regularly ineffective, and appeared to be having minimal impact in some cases. In 2018, journalists working as fact-checkers for the company criticized the partnership, stating that it had produced minimal results and that the company had ignored their concerns. In 2024, Meta's decision to continue to disseminate a falsified video of US president Joe Biden, even after it had been proven to be fake, attracted criticism and concern. In January 2025, Meta ended its use of third-party fact-checkers in favor of a user-run community notes system similar to the one used on X. While Zuckerberg supported these changes, saying that the amount of censorship on the platform was excessive, the decision drew criticism from fact-checking institutions, which stated that the changes would make it more difficult for users to identify misinformation. Meta also faced criticism for weakening its policies on hate speech that were designed to protect minorities and LGBTQ+ individuals from bullying and discrimination. While moving its content review teams from California to Texas, Meta changed its hateful conduct policy to eliminate restrictions on anti-LGBT and anti-immigrant hate speech, as well as to explicitly allow users to accuse LGBT people of being mentally ill or abnormal based on their sexual orientation or gender identity. 
In January 2025, Meta faced significant criticism for its role in removing LGBTQ+ content from its platforms, amid its broader efforts to address anti-LGBTQ+ hate speech. The removal of LGBTQ+ themes was noted as part of the wider crackdown on content deemed to violate its community guidelines. Meta's content moderation policies, which were designed to combat harmful speech and protect users from discrimination, inadvertently led to the removal or restriction of LGBTQ+ content, particularly posts highlighting LGBTQ+ identities, support, or political issues. According to reports, LGBTQ+ posts, including those that simply celebrated pride or advocated for LGBTQ+ rights, were flagged and removed for reasons that some critics argue were vague or inconsistently applied. Many LGBTQ+ activists and users on Meta's platforms expressed concern that such actions stifled visibility and expression, potentially isolating LGBTQ+ individuals and communities, especially in spaces that were historically important for outreach and support. Lawsuits Numerous lawsuits have been filed against the company, both when it was known as Facebook, Inc., and as Meta Platforms. In March 2020, the Office of the Australian Information Commissioner (OAIC) sued Facebook for significant and persistent breaches of privacy law involving the Cambridge Analytica scandal. Every violation of the Privacy Act is subject to a theoretical cumulative liability of $1.7 million. The OAIC estimated that a total of 311,127 Australians had been exposed. On December 8, 2020, the U.S. Federal Trade Commission and 46 states (excluding Alabama, Georgia, South Carolina, and South Dakota), the District of Columbia and the territory of Guam launched Federal Trade Commission v. Facebook as an antitrust lawsuit against Facebook. The lawsuit concerns Facebook's acquisition of two competitors, Instagram and WhatsApp, and the ensuing monopolistic situation. The FTC alleges that Facebook holds monopolistic power in the U.S. social networking market and seeks to force the company to divest from Instagram and WhatsApp to break up the conglomerate. William Kovacic, a former chairman of the Federal Trade Commission, argued the case would be difficult to win, as it would require the government to construct a counterfactual argument of an internet where the Facebook-WhatsApp-Instagram entity did not exist, and to prove that this harmed competition or consumers. In November 2025, it was ruled that Meta did not violate antitrust laws and holds no monopoly in the market. On December 24, 2021, a court in Russia fined Meta $27 million after the company declined to remove unspecified banned content. The fine was reportedly tied to the company's annual revenue in the country. In May 2022, a lawsuit was filed in Kenya against Meta and its local outsourcing company Sama, alleging that Meta provided poor working conditions in Kenya for workers moderating Facebook posts. According to the lawsuit, 260 screeners were declared redundant with confusing reasoning. The lawsuit seeks financial compensation and an order that outsourced moderators be given the same health benefits and pay scale as Meta employees. In June 2022, 8 lawsuits were filed across the U.S. alleging that excessive exposure to platforms including Facebook and Instagram had led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. The litigation follows a former Facebook employee's testimony in Congress that the company refused to take responsibility. 
The company noted that tools have been developed for parents to keep track of their children's activity on Instagram and to set time limits, in addition to Meta's "Take a break" reminders. In addition, the company is providing resources specific to eating disorders as well as developing AI to prevent children under the age of 13 from signing up for Facebook or Instagram. In June 2022, Meta settled a lawsuit with the US Department of Justice. The lawsuit, which was filed in 2019, alleged that the company enabled housing discrimination through targeted advertising, as it allowed homeowners and landlords to run housing ads excluding people based on sex, race, religion, and other characteristics. The U.S. Department of Justice stated that this was in violation of the Fair Housing Act. Meta was handed a penalty of $115,054 and given until December 31, 2022, to stop using the ad-targeting tool at issue. In January 2023, Meta was fined €390 million for violations of the European Union General Data Protection Regulation. In May 2023, the European Data Protection Board fined Meta a record €1.2 billion for breaching European Union data privacy laws by transferring personal data of Facebook users to servers in the U.S. In July 2024, Meta agreed to pay the state of Texas US$1.4 billion to settle a lawsuit brought by Texas Attorney General Ken Paxton accusing the company of collecting users' biometric data without consent, setting a record for the largest privacy-related settlement ever obtained by a state attorney general. In October 2024, Meta Platforms faced lawsuits in Japan from 30 plaintiffs who claimed they were defrauded by fake investment ads on Facebook and Instagram featuring false celebrity endorsements. The plaintiffs are seeking approximately $2.8 million in damages. In April 2025, the Kenyan High Court ruled that a US$2.4 billion lawsuit, in which three plaintiffs claim that Facebook inflamed civil violence in Ethiopia in 2021, could proceed. Also in April 2025, Meta was fined €200 million ($230 million) for breaking the Digital Markets Act by imposing a "consent or pay" system that forces users either to allow their personal data to be used to target advertisements or to pay a subscription fee for advertising-free versions of Facebook and Instagram. In late April 2025, a case was filed against Meta in Ghana over the alleged psychological distress experienced by content moderators employed to take down disturbing social media content, including depictions of murders, extreme violence and child sexual abuse. Meta moved the moderation service to the Ghanaian capital of Accra after legal issues in the previous location, Kenya. The new moderation company is Teleperformance, a multinational corporation with a history of workers' rights violations. Reports suggest that conditions are worse there than in the previous Kenyan location, with many workers afraid of speaking out due to fear of returning to conflict zones. Workers reported developing mental illnesses, attempted suicides, and low pay. On 26 January 2026, a New Mexico state court case was filed alleging that Mark Zuckerberg approved allowing minors to access artificial intelligence chatbot companions that safety staffers warned were capable of sexual interactions. In 2020, the company UReputation, which had been involved in several cases concerning the management of digital armies, filed a lawsuit against Facebook, accusing it of unlawfully transmitting personal data to third parties. 
Legal actions were initiated in Tunisia, France, and the United States. In 2025, the United States District Court for the Northern District of Georgia approved a discovery procedure, allowing UReputation to access documents and evidence held by Meta. Structure Meta's key management consists of: As of October 2022, Meta had 83,553 employees worldwide. As of June 2024, Meta's board consisted of the following directors: Meta Platforms is mainly owned by institutional investors, who hold around 80% of all shares. Insiders control the majority of voting shares. The three largest individual investors in 2024 were Mark Zuckerberg, Sheryl Sandberg and Christopher K. Cox. The largest shareholders in late 2024/early 2025 were: Roger McNamee, an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company". Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. In an op-ed in The New York Times, Hughes said he was concerned that Zuckerberg had surrounded himself with a team that did not challenge him, and that it is the U.S. government's job to hold him accountable and curb his "unchecked power". He also said that "Mark's power is unprecedented and un-American." Several U.S. politicians agreed with Hughes. European Union Commissioner for Competition Margrethe Vestager stated that splitting Facebook should be done only as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems. Revenue Facebook ranked No. 34 in the 2020 Fortune 500 list of the largest United States corporations by revenue, with almost $86 billion in revenue, most of it coming from advertising. One analysis of 2017 data determined that the company earned US$20.21 per user from advertising. According to New York, since its rebranding, Meta has reportedly lost $500 billion as a result of new privacy measures put in place by companies such as Apple and Google, which prevent Meta from gathering users' data. In February 2015, Facebook announced it had reached two million active advertisers, with most of the gain coming from small businesses. An active advertiser was defined as an entity that had advertised on the Facebook platform in the last 28 days. In March 2016, Facebook announced it had reached three million active advertisers, with more than 70% from outside the United States. Prices for advertising follow a variable pricing model based on auctioning ad placements and the potential engagement levels of the advertisement itself. Similar to other online advertising platforms like Google and Twitter, targeting of advertisements is one of the chief merits of digital advertising compared to traditional media. Marketing on Meta is employed through two methods based on the viewing habits, likes and shares, and purchasing data of the audience, namely targeted audiences and "lookalike" audiences. The U.S. IRS challenged the valuation Facebook used when it transferred IP from the U.S. to Facebook Ireland (now Meta Platforms Ireland) in 2010 (which Facebook Ireland then revalued higher before charging out), as it was building its double Irish tax structure. The case is ongoing and Meta faces a potential fine of $3–5bn. The U.S. Tax Cuts and Jobs Act of 2017 changed Facebook's global tax calculations. 
Meta Platforms Ireland is subject to the U.S. GILTI tax of 10.5% on global intangible profits (i.e. Irish profits). On the basis that Meta Platforms Ireland Limited is paying some tax, the effective minimum US tax for Facebook Ireland will be circa 11%. In contrast, Meta Platforms Inc. would incur a special IP tax rate of 13.125% (the FDII rate) if its Irish business relocated to the U.S. Tax relief in the U.S. (21% vs. Irish at the GILTI rate) and accelerated capital expensing would make this effective U.S. rate around 12%. The insignificance of the U.S./Irish tax difference was demonstrated when Facebook moved 1.5bn non-EU accounts to the U.S. to limit exposure to GDPR. Facilities Users outside of the U.S. and Canada contract with Meta's Irish subsidiary, Meta Platforms Ireland Limited (formerly Facebook Ireland Limited), allowing Meta to avoid US taxes for all users in Europe, Asia, Australia, Africa and South America. Meta makes use of the Double Irish arrangement, which allows it to pay 2–3% corporation tax on all international revenue. In 2010, Facebook opened its fourth office, in Hyderabad, India, which houses online advertising and developer support teams and provides support to users and advertisers. In India, Meta is registered as Facebook India Online Services Pvt Ltd. It also has offices or planned sites in Chittagong, Bangladesh; Dublin, Ireland; and Austin, Texas, among other cities. Facebook opened its London headquarters in 2017 in Fitzrovia in central London. Facebook opened an office in Cambridge, Massachusetts in 2018. The offices were initially home to the "Connectivity Lab", a group focused on bringing Internet access to those without it. In April 2019, Facebook opened its Taiwan headquarters in Taipei. In March 2022, Meta opened new regional headquarters in Dubai. In September 2023, it was reported that Meta had paid £149m to British Land to break the lease on its Triton Square London office. Meta reportedly had another 18 years left on its lease on the site. As of 2023, Facebook operated 21 data centers. It committed to purchasing 100% renewable energy and to reducing its greenhouse gas emissions 75% by 2020. Its data center technologies include Fabric Aggregator, a distributed network system that accommodates larger regions and varied traffic patterns. Reception US Representative Alexandria Ocasio-Cortez responded in a tweet to Zuckerberg's announcement about Meta, saying: "Meta as in 'we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society ... for profit!'" Frances Haugen, the ex-Facebook employee and whistleblower behind the Facebook Papers, responded to the rebranding efforts by expressing doubts about the company's ability to improve while led by Mark Zuckerberg, and urged the chief executive officer to resign. In November 2021, a video published by Inspired by Iceland went viral, in which a Zuckerberg look-alike promoted the Icelandverse, a place of "enhanced actual reality without silly looking headsets". In a December 2021 interview, SpaceX and Tesla chief executive officer Elon Musk said he could not see a compelling use-case for the VR-driven metaverse, adding: "I don't see someone strapping a frigging screen to their face all day." In January 2022, Louise Eccles of The Sunday Times logged into the metaverse with the intention of making a video guide. She wrote: Initially, my experience with the Oculus went well. 
I attended work meetings as an avatar and tried an exercise class set in the streets of Paris. The headset enabled me to feel the thrill of carving down mountains on a snowboard and the adrenaline rush of climbing a mountain without ropes. Yet switching to the social apps, where you mingle with strangers also using VR headsets, it was at times predatory and vile. Eccles described being sexually harassed by another user, as well as "accents from all over the world, American, Indian, English, Australian, using racist, sexist, homophobic and transphobic language". She also encountered users as young as 7 years old on the platform, despite Oculus headsets being intended for users over 13. See also References External links 37°29′06″N 122°08′54″W |
======================================== |