[SOURCE: https://en.wikipedia.org/wiki/Kingdom_of_Israel_(united_monarchy)] | [TOKENS: 4180] |
Kingdom of Israel (united monarchy) The Kingdom of Israel (Hebrew: מַמְלֶכֶת יִשְׂרָאֵל, Mamleḵeṯ Yīśrāʾēl) was an Israelite kingdom that may have existed in the Southern Levant. The first extra-biblical mention of Israel dates from the Merneptah Stele created by Pharaoh Merneptah in 1208 BCE. According to the Deuteronomistic history in the Hebrew Bible, the United Kingdom of Israel or the United Monarchy existed under the reigns of Saul, Ish-bosheth, David, and Solomon, encompassing the territories of both the later kingdoms of Judah and Israel. Whether the United Monarchy existed—and, if so, to what extent—is a matter of ongoing academic debate. During the 1980s, some biblical scholars began to argue that the archaeological evidence for an extensive kingdom before the late 8th century BCE is too weak, and that the methodology used to obtain the evidence is flawed. Scholars remain divided among those who support the historicity of the biblical narrative, those who doubt or dismiss it, and those who support the kingdom's theoretical existence while maintaining that the biblical narrative is exaggerated. Proponents of the kingdom's existence traditionally date it to between c. 1047 BCE and c. 930 BCE. In the 1990s, Israeli archaeologist Israel Finkelstein contended that existing archaeological evidence for the United Monarchy in the 10th century BCE should be dated to the 9th century BCE. This model placed the biblical kingdom in Iron Age I, suggesting that it was not functioning as a country under centralized governance but rather as a tribal chiefdom over a small polity in Judah, disconnected from the north's Israelite tribes. The rival chronology of Israeli archaeologist Amihai Mazar places the relevant period beginning in the early 10th century BCE and ending in the mid-9th century BCE, addressing the problems of the traditional chronology while still aligning pertinent findings with the time of Saul, David, and Solomon. Mazar's chronology and the traditional one have been fairly widely accepted, though there is no current consensus on the topic. Recent archaeological discoveries by Israeli archaeologists Eilat Mazar and Yosef Garfinkel in Jerusalem and Khirbet Qeiyafa, respectively, seem to support the existence of the United Monarchy, but the dating and identifications are not universally accepted. The historicity of Solomon and his rule is the subject of significant debate. Current scholarly consensus allows for a historical Solomon, but regards his reign as king over Israel and Judah in the 10th century BCE as uncertain and the biblical portrayal of his apparent empire's opulence as most probably an anachronistic exaggeration. According to the biblical account, on the succession of Solomon's son Rehoboam, the United Monarchy split into two separate kingdoms: the Kingdom of Israel in the north, containing the cities of Shechem and Samaria; and the Kingdom of Judah in the south, containing Jerusalem and the Jewish Temple. Archaeological record In the 1980s, a few biblical scholars began to assert that the archaeological evidence for an extensive kingdom before the late 8th century BCE is too weak, and that the methodology used to obtain the evidence is flawed. In 1995 and 1996, Israel Finkelstein published two papers in which he proposed a Low Chronology for the stratigraphy of Iron Age Israel. 
Finkelstein's model would push the stratigraphic dates assigned by the conventional chronology up to a century later, leading Finkelstein to conclude that much of the monumental architecture traditionally associated with the biblical United Monarchy and characterizing Israel in the 10th century BCE instead belongs to the 9th century. Finkelstein wrote that "Accepting the Low Chronology means stripping the United Monarchy of monumental buildings, including ashlar masonry and proto-Ionic capitals". According to Finkelstein and Neil Silberman, the authors of The Bible Unearthed, ideas of a united monarchy are not accurate history but "creative expressions of a powerful religious reform movement" that are possibly "based on certain historical kernels." Finkelstein and Silberman accept that David and Solomon were real kings of Judah around the 10th century BCE, but they cite the fact that the earliest independent reference to the Kingdom of Israel dates to about 890 BCE, and the earliest to the Kingdom of Judah to about 750 BCE. Some see the united monarchy as fabricated during the Babylonian Exile, transforming David and Solomon from local folk heroes into rulers of international status. Finkelstein has posited a potential United Monarchy under Jeroboam II in the 8th century BCE, whereas the former one was potentially invented during the reign of Josiah to justify his territorial expansion. Finkelstein's views have been strongly criticized by Amihai Mazar; in response, Mazar proposed the Modified Conventional Chronology, which places the beginning of the Iron IIA period in the early 10th century and its end in the mid-9th century, solving the problems of the High Chronology while still dating the archaeological discoveries to the 10th century BCE. Finkelstein's Low Chronology and views about the monarchy have received strong criticism from other scholars, including Amnon Ben-Tor, William G. Dever, Kenneth Kitchen, Doron Ben-Ami, Raz Kletter and Lawrence Stager. Though Amélie Kuhrt acknowledges that "there are no royal inscriptions from the time of the united monarchy (indeed very little written material altogether) and not a single contemporary reference to either David or Solomon," she concludes, "Against this must be set the evidence for substantial development and growth at several sites, which is plausibly related to the tenth century." Kenneth Kitchen (University of Liverpool) reaches a similar conclusion, arguing that "the physical archaeology of tenth-century Canaan is consistent with the former existence of a unified state on its terrain." On August 4, 2005, archaeologist Eilat Mazar announced that she had discovered in Jerusalem what may have been the palace of King David. Now referred to as the Large Stone Structure, Mazar's discovery consists of a public building she dated to the 10th century BCE, a copper scroll, pottery from the same period, and a clay bulla, or inscribed seal, of Jehucal, son of Shelemiah, son of Shevi, an official mentioned at least twice in the Book of Jeremiah. In July 2008, she also found a second bulla, belonging to Gedaliah ben Pashhur, who is mentioned together with Jehucal in Jeremiah 38:1. Amihai Mazar called the find "something of a miracle." He has said that he believes the building may be the Fortress of Zion that David is said to have captured. Other scholars are skeptical that the foundation walls are from David's palace. Garfinkel also claimed to have discovered David's palace in 2013, 25 kilometres away, at Khirbet Qeiyafa. 
Excavations at Khirbet Qeiyafa, an Iron Age site in Judah, found an urbanized settlement radiocarbon-dated to well before the period in which scholars such as Finkelstein suggest urbanization began in Judah, which supports the existence of an urbanized kingdom in the 10th century BCE. The Israel Antiquities Authority stated, "The excavations at Khirbat Qeiyafa reveal an urban society that existed in Judah already in the late eleventh century BCE. It can no longer be argued that the Kingdom of Judah developed only in the late eighth century BCE or at some other later date." The techniques and interpretations used to reach some conclusions about Khirbet Qeiyafa have been criticized by some scholars, such as Finkelstein and Alexander Fantalkin. In 2010, archaeologist Eilat Mazar announced the discovery of part of the ancient city walls around the City of David, which she believes dates to the tenth century BCE. According to Mazar, "It's the most significant construction we have from First Temple days in Israel," and "It means that at that time, the 10th century, in Jerusalem, there was a regime capable of carrying out such construction." The 10th century is the period the Bible describes as the reign of King Solomon. Not all archaeologists agree with Mazar, and archaeologist Aren Maeir is dubious about such claims and about Mazar's dating. In the Jewish Study Bible (2014), Oded Lipschits states that the concept of a United Monarchy should be abandoned, while Aren Maeir believes there is insufficient evidence in support of the United Monarchy. In August 2015, Israeli archaeologists discovered massive fortifications in the ruins of the ancient city of Gath, the supposed birthplace of Goliath. The size of the fortifications shows that Gath was a large city in the 10th century BCE, perhaps the largest in Canaan at the time. The professor leading the dig, Aren Maeir, estimated that Gath was as much as four times the size of contemporary Jerusalem, casting doubt on whether David's kingdom could have been as powerful as described in the Bible. In his book The Forgotten Kingdom (2013), Israel Finkelstein argued that Saul, originally from the territory of Benjamin, had gained power in his native Gibeon region around the 10th century BCE and that he conquered Jerusalem in the south and Shechem to the north, creating a polity that threatened Egypt's geopolitical interests. In response, the Egyptian pharaoh Shoshenq I invaded the territory, destroyed this new polity, and installed David of Bethlehem in Jerusalem (Judah) and Jeroboam I in Shechem (Israel) as small local rulers who were vassals of Egypt. Finkelstein concludes that the memory of a united monarchy was inspired by Saul's conquered territory, which served first as the ideal of a great united monarchy ruled by a northern king in the times of Jeroboam II and later as the idea of a united monarchy ruled from Jerusalem. In an article in the Biblical Archaeology Review, William G. Dever strongly criticized Finkelstein's theory, calling it full of "numerous errors, misrepresentations, over-simplifications and contradictions." Dever noted that Finkelstein proposes that Saul ruled a polity extending as far north as Jezreel and as far south as Hebron and reaching a border with Gath, with a capital located in Gibeon rather than Jerusalem. According to Dever, such a polity is a united monarchy in its own right, ironically confirming the biblical tradition. 
In addition, he rejected the notion that Gibeon was the capital of such a polity, since there is "no clear archaeological evidence of occupation in the tenth century, much less monumental architecture." Dever went as far as to dismiss Finkelstein's theory as "a product of his fantasy, stemmed by his obsession to prove that Saul, David and Solomon were not real kings and that the United Monarchy is an invention of a Judahite-biased biblical writer." Dever concluded by stating that "Finkelstein has not discovered a forgotten kingdom. He had invented it. The careful reader will nevertheless gain some insights into Israel—Israel Finkelstein, that is." A more moderate review in the same magazine was written by Aaron Burke, who described Finkelstein's book as "ambitious" and praised its literary style but did not accept his conclusions: according to Burke, Finkelstein's thesis is mainly based on his proposed Low Chronology, ignores the criticism that it has received from scholars such as Amihai Mazar, Christopher Bronk Ramsey and others, and engages in several speculations that archaeology and biblical and extrabiblical sources cannot prove. Burke also criticized Finkelstein for persistently trying to downgrade the role of David in the development of ancient Israel. In his books Beyond the Texts (2018) and Has Archaeology Buried the Bible? (2020), William G. Dever has defended the historicity of the United Monarchy, maintaining that the reigns of Saul, David and Solomon are "reasonably well attested." Similar arguments were advanced by Amihai Mazar in two essays written in 2010 and 2013, which point toward archaeological evidence that emerged from the excavations of Eilat Mazar in Jerusalem and of Yosef Garfinkel at Khirbet Qeiyafa. The archaeologist Avraham Faust, reviewing Beyond the Texts, stated "Dever's view of the historicity of the united monarchy, which will probably be the main interest of many readers, is that the state or states appeared in the early tenth century but should be defined as 'early inchoate state' (363), not the empire described in the Bible." In 2018, Faust announced that his excavations at Tel 'Eton (believed to be the biblical Eglon) had uncovered an elite house (which he referred to as "the governor's residency"), whose foundations were dated by carbon-14 analysis to the late 11th–10th century BCE, the time usually ascribed to Saul, David and Solomon. Such dating would strengthen the thesis that a centralized state existed at the time of David. According to Dever (2021), 10th-century Judah was "something like an 'early inchoate state,' one that will not be fully consolidated until the 9th century BCE", while Israel had a separate development. In their book The Bible's First Kings (2025), Avraham Faust and Zev Farber have argued that the United Monarchy was a historical mini-empire and that archaeological evidence and early biblical traditions attest to its emergence in the 10th century BCE. Faust and Farber say that, as of 2025, Bible scholars embrace radical skepticism about the United Monarchy, but archaeologists do not. Historical sources According to mainstream source criticism, several contrasting source texts were spliced together to produce the current Books of Samuel. The most prominent sections in the early parts of the first book come from a pro-monarchical source and from an anti-monarchical source. By identifying both sources, two separate accounts can be reconstructed. 
The pro-monarchical source describes the divinely-appointed birth of Saul (a single word being changed by a later editor so that it referred to Samuel) and his leading of an army to victory over the Ammonites, which resulted in the people clamouring for him to lead them against the Philistines when he is appointed king. Many scholars believe that the Books of Samuel exhibit too many anachronisms to have been a contemporary account. For example, the text mentions later armour (1 Samuel 17:4–7, 38–39; 25:13), the use of camels (1 Samuel 30:17), cavalry (as distinct from chariotry) (1 Samuel 13:5, 2 Samuel 1:6), and iron picks and axes (as if they were prevalent) (2 Samuel 12:31). Most scholars believe that the text of the Books of Samuel was compiled in the 8th century BCE - rather than in the 10th century when most of the events described took place - based on historical and legendary sources. The narrative served primarily to fill the gap in Israelite history after the events described in Deuteronomy. Biblical narrative According to the biblical account, the united monarchy was formed when the elders of Israel expressed the desire for a king. God and Samuel seem to have a distaste for the monarchy, with God telling Samuel that "[Israel has] rejected me, that I should not be king over them." However, Samuel still proceeds with the establishment of a monarchy by anointing Saul. In the Second Book of Samuel, Saul's disobedience prompts Yahweh to curtail his reign and to hand his kingdom over to another dynasty, leading to Saul's death in battle against the Philistines. His heir Ish-bosheth rules for only two years before being assassinated. Though David was only the King of Judah, he ends the conspiracy and is appointed King of Israel in Ish-bosheth's place. Some textual critics and biblical scholars suggest that David was responsible for the assassination and that his innocence was a later invention to legitimize his actions. Israel rebels against David and crowns David's son Absalom. David is forced into exile east of the Jordan River but eventually launches a successful counterattack, which results in the death of Absalom. Having retaken Judah and asserted control over Israel, David returns west of the Jordan. Throughout the monarchy of Saul, the capital is in Gibeah. After Saul's death, Ish-bosheth rules over the Kingdom of Israel from Mahanaim, and David establishes the capital of the Kingdom of Judah in Hebron. After the civil war with Saul, David forges a powerful and unified Israelite monarchy and rules from c. 1000 to 961 BCE. Some modern archaeologists, however, believe that the two distinct cultures and geographic entities of Judah and Israel continued uninterrupted, and if a political union between them existed, it might have had no practical effect on their relationship. In the biblical account, David embarks on successful military campaigns against the enemies of Judah and Israel and defeats such regional entities as the Philistines to secure his borders. Israel grows from kingdom to empire, its military and political sphere of influence expanding to control the weaker client states of Philistia, Moab, Edom and Ammon, with Aramaean city-states Aram-Zobah and Aram-Damascus becoming vassal states. David is succeeded by his son Solomon, who obtains the throne in a somewhat-disreputable manner from the rival claimant Adonijah, his elder brother. 
Like David's palace, Solomon's temple is designed and built with the assistance of Tyrian architects, skilled labourers, money, jewels, cedar and other goods obtained in exchange for land ceded to Tyre. Solomon goes on to rebuild numerous significant cities, including Megiddo, Hazor and Gezer. Some scholars have attributed aspects of archaeological remains excavated from these sites, including six-chambered gates and ashlar palaces, to the building programme. However, Israel Finkelstein's Low Chronology would date them to the 9th century BCE. Yigael Yadin later concluded that the stables that had been believed to have served Solomon's vast collection of horses were built by King Ahab in the 9th century BCE. Following Solomon's death in c. 926 BCE, tensions between the northern part of Israel, containing the ten northern tribes, and the southern section, dominated by Jerusalem and the southern tribes, reached a boiling point. When Solomon's son and successor Rehoboam dealt tactlessly with economic complaints of the northern tribes, in about 930 BCE (there are differences of opinion as to the actual year), the Kingdom of Israel and Judah split into two kingdoms: the northern Kingdom of Israel, which included the cities of Shechem and Samaria, and the southern Kingdom of Judah, which contained Jerusalem. The Kingdom of Israel (or the Northern Kingdom or Samaria) existed as an independent state until 722 BCE, when it was conquered by the Neo-Assyrian Empire. The Kingdom of Judah (or the Southern Kingdom) existed as an independent state until 586 BCE, when it was conquered by the Neo-Babylonian Empire. Biblical chronology Many alternative chronologies have been suggested, and there is no ultimate consensus among the different factions and scholarly disciplines concerned with the period as to when it is depicted as having begun or when it ended. Most biblical scholars follow either of the older chronologies established by American archaeologists William F. Albright and Edwin R. Thiele or the newer one by Israeli historian Gershon Galil. Thiele's chronology generally corresponds with Galil's, with a difference of one year at most. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-maher20231208_64-4] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. One American retailer later recalled of the console's North American launch: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race said "$299" and left the stage to a round of applause. Attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers, who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994), as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games and consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test-market exercise during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (the PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released officially because a third company had registered the trademark, so the market was initially dominated by the officially distributed Sega Saturn; as the Sega console was withdrawn, however, imports of the PlayStation and widespread piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation had grown to a base of 300,000 users by January 2000, even though Sony China had no plans to release it officially. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YOUR WORLD. PLAY IN OURS." and "U R NOT E" (with a red "E"); in the stylised artwork, some letters were replaced by the four geometric shapes derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. 
Let me show you how ready I am.'" As the console's appeal enlarged, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical over Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclub owners such as Ministry of Sound and festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, encouraged by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, PlayStation became the first console to ship 100 million units with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001, and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is a R3000 CPU made by LSI Logic operating at a clock rate of 33.8688 MHz and 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. It features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusual for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can also generate a total of 4,000 sprites and 180,000 polygons per second, in addition to 360,000 per second flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors from the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in number of parallel ports, with the final version only retaining one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service and with the necessary documentation and software to program PlayStation games and applications through C programming compilers. 
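To make the memory figures in the hardware overview above more concrete, the following C sketch estimates how much of the console's 1 MB of video RAM a double-buffered framebuffer would occupy at the low and high ends of the quoted resolution range (256×224 to 640×480). The 16-bits-per-pixel buffer format, the double-buffering arrangement, and the intermediate 320×240 mode are common assumptions added here for illustration, not figures taken from this article.

```c
#include <stdio.h>

#define VRAM_BYTES      (1024 * 1024)  /* 1 MB of video RAM, as quoted above      */
#define BYTES_PER_PIXEL 2              /* assumed 16-bit-per-pixel display buffer */

struct mode { int width, height; };

int main(void)
{
    /* 256x224 and 640x480 are the limits mentioned in the article;
       320x240 is a common intermediate mode added as an assumption. */
    const struct mode modes[] = { { 256, 224 }, { 320, 240 }, { 640, 480 } };
    size_t i;

    for (i = 0; i < sizeof(modes) / sizeof(modes[0]); i++) {
        long one  = (long)modes[i].width * modes[i].height * BYTES_PER_PIXEL;
        long both = 2 * one;                          /* front + back buffer */
        printf("%3dx%-3d: %7ld bytes per buffer, %8ld double-buffered (%.0f%% of VRAM)\n",
               modes[i].width, modes[i].height, one, both,
               100.0 * both / VRAM_BYTES);
    }
    return 0;
}
```

Under these assumptions, a double-buffered 640×480 display alone would exceed the 1 MB of video RAM, which helps illustrate why, as noted above, different games chose different resolutions and why higher-resolution modes left little memory for textures.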
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square (△, ○, ✕, □). Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue input device, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used in instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that was not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
were subsequently forced to shut down in November 2001.

Sony was aware that using CDs for game distribution could have left games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberate irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, the disc drive could not detect the wobble frequency, and therefore duplicated discs omitted it, since the laser pick-up system of any optical disc drive would interpret this wobble as an oscillation of the disc surface and compensate for it in the reading process. (A simplified sketch of this check appears at the end of this article.)

Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws in a small amount of power (and therefore heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.

Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, cumulative software shipments stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, this being the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format.

Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, where they commented that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for each editor, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors.

Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025[update], with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with the video game division coming to contribute roughly 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting that its appeal to older audiences was a crucial factor in propelling the video game industry, as was its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future.

The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the proprietary, cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user compared to ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare.

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run off the open source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
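The disc-authentication behaviour described in the copy-protection passage earlier in this article can be pictured with a small toy model. The sketch below is purely illustrative and makes several assumptions: the class and function names, the region string, and the idea of representing the physical wobble signal as a single attribute are hypothetical simplifications, not the actual console firmware or disc format.

```python
# Toy model of the pregap wobble check: a licensed pressing carries a
# wobble-encoded region code; an ordinary burner copies the readable data
# but cannot reproduce the wobble, so the copy fails the boot check.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Disc:
    game_data: bytes
    # Region code physically encoded as a wobble in the pregap of a pressed
    # disc; None models a disc whose pregap carries no decodable wobble signal.
    wobble_region_code: Optional[str] = None

def press_genuine_disc(game_data: bytes, region: str) -> Disc:
    """A licensed pressing plant moulds the wobble-encoded region code into the disc."""
    return Disc(game_data=game_data, wobble_region_code=region)

def burn_copy(original: Disc) -> Disc:
    """A CD burner copies the readable user data but cannot reproduce the wobble,
    because an ordinary drive treats the wobble as surface oscillation and filters it out."""
    return Disc(game_data=original.game_data, wobble_region_code=None)

def console_will_boot(disc: Disc, console_region: str) -> bool:
    """The console's pickup must decode a wobble code matching its own region."""
    return disc.wobble_region_code == console_region

# Usage: a genuine pressing boots on a matching-region console; a burned copy does not.
genuine = press_genuine_disc(b"...game image...", region="REGION-J")  # hypothetical region tag
copy = burn_copy(genuine)
print(console_will_boot(genuine, "REGION-J"))  # True
print(console_will_boot(copy, "REGION-J"))     # False
```

The only point of the model is the asymmetry: the pressing plant can write the wobble-encoded code, while a burner reproduces the user data without it, so copies fail the boot check even though their data content is identical.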
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_data_revolution] | [TOKENS: 2400] |
Contents Social data revolution The social data revolution is the shift in human communication patterns towards increased personal information sharing and its related implications, made possible by the rise of social networks in the early 2000s. This phenomenon has resulted in the accumulation of unprecedented amounts of public data. This large and frequently updated data source has been described as a new type of scientific instrument for the social sciences. Several independent researchers have used social data to "nowcast" and forecast trends such as unemployment, flu outbreaks, the mood of whole populations, travel spending and political opinions in a way that is faster, more accurate and cheaper than standard government reports or Gallup polls. Social data refers to data individuals create that is knowingly and voluntarily shared by them. Cost and overhead previously rendered this semi-public form of communication unfeasible, but advances in social networking technology from 2004–2010 have made broader concepts of sharing possible. The types of data users are sharing include geolocation, medical data, dating preferences, open thoughts, interesting news articles, etc. The social data revolution not only enables new business models like those pioneered on Amazon.com but also provides large opportunities to improve decision-making for public policy and international development. The analysis of large amounts of social data leads to the field of computational social science. Classic examples include the study of media content or social media content.

Evolution of social data Every internet activity leaves behind traces of data (a digital footprint) which can be used to learn more about the user. As use of the internet becomes more widespread, the datafication of the world is progressing rapidly: currently, around 16 zettabytes of data are produced per year, and 163 zettabytes are expected for the year 2025. This has led to data becoming a critical commodity. This ties together all societal actors: public institutions, private firms and individuals each rely on data in a unique way. Governments have been collecting data for centuries to ensure the continuance of institutional systems, through limiting the risk of defaulted credit, collecting tax based on income, and providing the necessary infrastructure in consideration of their citizens' demographic distribution. In its beginnings, this data entailed written information for record keeping and control, including a census system. This analogue process was very time- and cost-intensive, leaving little room for interpreting larger data sets. Meanwhile, corporate technological developments have moved this offline data into the digital age, allowing visualization and data analytics. In the public sphere, connecting survey and poll methodologies with database computing resulted in the ability to gather and store large data sets on individuals.

Web 2.0 and social network sites Over the last few decades, the internet has shifted from being used mostly as a source of information about the world to being primarily used for communication, user-generated content, data sharing, and community building. This is what many consider to be the development of "Web 2.0". Social network sites such as Facebook and YouTube are the foundation of Web 2.0 and the shift to social data sharing. Early examples of social data websites are Craigslist and the wishlists of Amazon.com.
Both enable users to communicate information to anybody who is looking for it. They differ in their approach to identity. Craigslist leverages the power of anonymity, while Amazon.com leverages the power of persistent identity, based on the history of the customer with the firm. The job market is even being shaped by the information people share about themselves on sites like LinkedIn and Facebook. Examples of more sophisticated social data sites are Twitter and Facebook. On Twitter, sending a message or tweet is as simple as sending an SMS text message. Twitter made this C2W, customer to the world: any tweet a user sends can potentially be read by the entire world. Facebook focuses on interactions between friends, C2C in traditional language. It provides many ways of collecting data from its users: they can "tag" a friend in a photo, "comment" on what a friend posted, or just "like" it. These data are the basis for sophisticated models of the relationships between users. They can be used to significantly increase the relevance of what is shown to the user, and for advertising purposes. By 2009, the popularity of social networking sites had increased to four times what it had been in 2005. As of 2013, Twitter has over 250 million users sharing almost 500 million tweets per day, and Facebook has well over one billion users around the world.

Business sector and social data Companies, advertisers, and other entities often use the data that is shared via social networking sites and other data-sharing avenues. Social networking sites, for example, can sell user data to advertisers and other entities, which can then use it to influence consumer decisions. Data mining is also used to gather this information. While websites and other applications were the origins of this data collection, with improvements in technology many devices used in daily life, such as smartphones, smartwatches and music devices, can now collect data on individuals and are therefore increasing the amount of personal data that is available. This growth of people's digital identity (the information available via these electronic sources) is being used by companies and organizations to improve products and services and to reduce costs by targeting what consumers want and expect. The data that can be gathered include shopping experiences, social media preferences, demographic information and more. Using this data allows for better personalization of products and has become an expected and vital aspect of product use and production. The data that is accessible about consumers can be used to infer their behavioral patterns. For example, location information is used to assess when and where consumers shop, so that ads and promotions can be targeted based on the stores they visit. Online retailers have also gained insight into how to better personalize the online shopping experience through data gathered during online transactions. Businesses can even use consumer data to determine whether different shelf spacing of products has an effect on consumer purchasing decisions, as well as to assess cross-item marketing potential based on items often purchased together. While businesses and advertisers often take advantage of the consumer data available, consumers also use other users' information for their purchase decisions. Social commerce sites are where consumers share product/service experiences, opinions and other information. A famous example of such a site is Pinterest, which has over 100 million users.
These sites and other online sources of product/brand information are influential on consumers' purchasing decisions. It is estimated that about 67% of online customers use this information in making their purchase decisions. These sites create an environment that consumers consider trusted, since the information is coming from other consumers.

Other uses of social data With the vast amount of accessible data about individuals, the potential uses of this information are growing. The healthcare sector has many potential uses for this data. Information gathered from social media and other social data sharing sources can be used to predict flu and disease outbreaks, assess how emergency responses are handled, and more. With the use of Twitter and geotags, medical researchers can evaluate the health of a particular neighborhood and use that information to provide better outreach and services. Medtronic has developed a digital blood glucose meter that allows health care providers and patients to know about low glucose levels. Social data can also be used to assess reactions to crises. After Hurricane Sandy, researchers used Twitter to evaluate the emotions and issues that those affected were facing. This information can potentially be used to help better prepare for and respond to future crises. This data can also be used to assist with urban planning. The city of Boston has used rider information from Uber to improve transportation planning and road maintenance. Using social data for research purposes has led to the development of computational social science. Computational social science combines social science, computer science, and network science. This field emerged in 2009. Before the rise of social data and the technological advances that supported it, researchers were limited to a narrow view of information based on individuals, since their primary form of research relied on interviews. With the vast amount of social data available today, researchers can now analyze a wider group and obtain a broader view of information. They can draw on social networks and cell phone data, and perform online experiments, allowing them to gather more information than before.

Privacy concerns With the amount of data about individuals accessible by many sources, privacy has become a major concern. Security breaches of customer and other social information, such as the compromise of more than 56 million Home Depot customers' credit card information, have heightened concern over the privacy of social data. How companies use the personal information gathered, and its potential misuse, is a concern for the majority of consumers. Despite this, many people do not know how social networking sites and other sources are using and selling their data. In a 2014 study, only 25% of online users knew that their location could be accessed, and only 14% knew that their web-surfing history could be accessed and shared. Even though privacy concerns are a critical factor in people's sharing of personal information on the internet and in overall internet involvement, most people are willing to share this information if the benefits of doing so outweigh the potential privacy and security costs. Consumers enjoy the personalization of products and services made possible by this information gathering and, despite the concerns, continue to use them.
International development
"From a macro-perspective, it is expected that Big Data-informed decision-making will have a similar positive effect on efficiency and productivity as ICT have had during the recent decade." — Hilbert 2013
In his study of the data revolution in international development, Martin Hilbert, Professor of Social Sciences at UC Davis, argued that the natural next step from the information societies fueled by ICT since the late 1990s is knowledge societies informed by Big Data analysis. Decision-making informed by big data analysis has improved both efficiency and productivity in the developed world. Hilbert examines the challenges and potential of the data revolution for "the unruly world of international development."

Types of data Hilbert identified four types of data available in large quantities by 2013: words, locations, nature, and behavior. Individual interactions with the internet, such as words in comments, social media postings, and Google search term volumes, offer an increasingly large source of big data. Typically, statistics are generated through a census or a probability survey (for example, the Annual Social and Economic Supplement (ASEC), the Current Population Survey (CPS), the American Community Survey (ACS), or the National Health Interview Survey (NHIS) in the United States) or from administrative records, such as payroll, unemployment and Social Security income taxes, scanner data, credit card data and other commercial transaction records.
"Google has analyzed clusters of search terms by region in the United States to predict flu outbreaks faster than was possible using hospital admission records." — Shaw 2014, "Why 'Big Data' Is a Big Deal"
Weatherhead University Professor Gary King described how the revolution lies not just in the quantity of data available but in the ability to do something with the data to benefit society. Global Positioning System (GPS)-enabled mobile tablets and phones, Radio-frequency identification (RFID) chips (part of Automatic identification and data capture (AIDC) technologies), telematics, location-based games and similar technologies provide data on absolute location and relative movement. Hilbert categorizes data on natural processes under "Nature", which includes sensors that provide data on air moisture and temperature. Data can also be generated from user behavior in multiplayer online games, such as League of Legends, World of Warcraft, Minecraft, Call of Duty, and Dota 2. Nathan Eagle, a computer scientist at the Santa Fe Institute in New Mexico, began using cellphones in the early 2000s to collect accurate, large-scale data about real social interactions. The project was named one of the "10 Technologies Most Likely To Change The Way We Live" by the MIT Technology Review.
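To make the nowcasting idea described above concrete, the following is a minimal sketch of how a search-volume index could be regressed against an official indicator and then used to estimate the current value before the official figure is published. All of the numbers are hypothetical, and the simple one-variable least-squares fit stands in for the considerably more elaborate models used in actual studies such as flu nowcasting.

```python
# Minimal nowcasting sketch (hypothetical numbers throughout): fit a linear
# model mapping a search-term volume index to an official indicator, then
# "nowcast" the current week before the official statistic is released.
import numpy as np

# Hypothetical historical data: weekly search-volume index vs. the officially
# reported flu-activity rate for the same weeks.
search_index = np.array([12.0, 15.0, 21.0, 30.0, 44.0, 52.0, 61.0, 58.0])
official_rate = np.array([1.1, 1.3, 1.9, 2.6, 3.8, 4.4, 5.2, 5.0])

# Ordinary least-squares fit: official_rate ≈ slope * search_index + intercept.
slope, intercept = np.polyfit(search_index, official_rate, deg=1)

# This week's search index is already observable, while the official statistic
# arrives only later; the fitted model gives an immediate estimate.
current_search_index = 47.0
nowcast = slope * current_search_index + intercept
print(f"Nowcast of this week's flu-activity rate: {nowcast:.2f}")
```

The same pattern generalises to the other indicators mentioned above (unemployment, travel spending, population mood): an easily observed social-data signal is calibrated against a slower, authoritative series and then used to fill the gap until the official number appears.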
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Special:BookSources/978-0857727886] | [TOKENS: 380] |
Contents Book sources This page allows users to search multiple sources for a book given a 10- or 13-digit International Standard Book Number. Spaces and dashes in the ISBN do not matter. This page links to catalogs of libraries, booksellers, and other book sources where you will be able to search for the book by its International Standard Book Number (ISBN). Online text Google Books and other retail sources below may be helpful if you want to verify citations in Wikipedia articles, because they often let you search an online version of the book for specific words or phrases, or you can browse through the book (although for copyright reasons the entire book is usually not available). At the Open Library (part of the Internet Archive) you can borrow and read entire books online. Online databases Subscription eBook databases Libraries Alabama Alaska California Colorado Connecticut Delaware Florida Georgia Illinois Indiana Iowa Kansas Kentucky Massachusetts Michigan Minnesota Missouri Nebraska New Jersey New Mexico New York North Carolina Ohio Oklahoma Oregon Pennsylvania Rhode Island South Carolina South Dakota Tennessee Texas Utah Washington state Wisconsin Bookselling and swapping Find your book on a site that compiles results from other online sites: These sites allow you to search the catalogs of many individual booksellers: Non-English book sources If the book you are looking for is in a language other than English, you might find it helpful to look at the equivalent pages on other Wikipedias, linked below – they are more likely to have sources appropriate for that language. Find other editions The WorldCat xISBN tool for finding other editions is no longer available. However, there is often a "view all editions" link on the results page from an ISBN search. Google Books often lists other editions of a book and related books under the "about this book" link. You can convert between 10 and 13 digit ISBNs with online conversion tools.
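The converter links themselves are not reproduced in this extract, but the underlying conversion is simple check-digit arithmetic. The sketch below applies the standard ISBN rules: an ISBN-13 is formed by prefixing 978 to the first nine digits of the ISBN-10 and recomputing the check digit with alternating weights of 1 and 3; the example number at the end is arbitrary and used only for illustration.

```python
def isbn10_check_digit(first9: str) -> str:
    """ISBN-10 check digit: weighted sum (weights 10 down to 2) plus the check digit must be 0 mod 11."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def isbn10_to_isbn13(isbn10: str) -> str:
    """Convert an ISBN-10 to its ISBN-13 form: 978 prefix plus a recomputed check digit."""
    digits = isbn10.replace("-", "").replace(" ", "")
    if len(digits) != 10 or isbn10_check_digit(digits[:9]) != digits[9].upper():
        raise ValueError("not a valid ISBN-10")
    core = "978" + digits[:9]                      # drop the old check digit
    total = sum((1 if i % 2 == 0 else 3) * int(d)  # ISBN-13 weights alternate 1, 3
                for i, d in enumerate(core))
    check = (10 - total % 10) % 10
    return core + str(check)

# Example with an arbitrary ISBN-10 chosen only for illustration:
print(isbn10_to_isbn13("0-306-40615-2"))  # -> 9780306406157
```

Converting in the other direction works the same way: drop the 978 prefix and the old check digit, then recompute the ISBN-10 check digit modulo 11 as in the first function.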
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-278] | [TOKENS: 17273] |
Contents United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. As of 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890. 
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million. 
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Nederland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence. 
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since its drafting in 1777, the Articles of Confederation was ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. His resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march. 
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all the Thirteen colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. Dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War. 
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th, when the United States already outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917. 
The United States entered World War I alongside the Allies in 1917 helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, radio for mass communication and early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies of World War II in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Italy and Germany until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. totally withdrawing in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The Fall of Communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology. 
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état. Geography The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean. 
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida, and U.S. territories in the Caribbean and Pacific are tropical. The United States experiences more high-impact extreme weather events than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change, extreme weather has become more frequent in the U.S. in the 21st century, with three times as many reported heat waves as in the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered most attractive to the population are also the most vulnerable. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since the passage of the Wilderness Act in 1964. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S. 
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared among three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform, while the latter is perceived as relatively conservative. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression. 
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement (USMCA). The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid later resumed. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches: the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active-duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, which is by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad, and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force. 
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies in the United States, operating at every level from local to national. Law in the United States is mainly enforced by local police departments and sheriffs' departments in their municipal or county jurisdictions. State police departments have authority within their respective states, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing the rulings of U.S. federal courts and federal laws, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power parities (PPP), and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S. 
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large market for U.S. Treasury securities, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including Canada and Mexico under the USMCA. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value of output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five, or approximately 13 million, children experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right. 
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth as a percentage of GDP. In 2022, the United States was (after China) the country with the second-highest number of published scientific papers. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025 the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, including Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuel, and its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population, but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity. 
It also has the highest number of nuclear power reactors of any country. From 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. The U.S. is among the top ten countries with the highest vehicle ownership per capita (850 vehicles per 1,000 people) in 2022. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to that of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, as well as five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities, and there are also some private airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to more passenger-centered rail in Europe). Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S. 
population had a net gain of one person every 16 seconds, or about 5400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and, at 23%, it had the world's highest rate of children living in single-parent households in 2019. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. De facto, English is the official language of the United States, and in 2025, Executive Order 14224 declared English official. However, the U.S. has never had a de jure official language, as Congress has never passed a law to designate English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken by 1 million people at home in 2010, fell to 857,000 total speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants. 
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level. This was an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of the population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications. 
In 2010, then-President Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 (having won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditure on higher education, the U.S. spends more per student than the OECD average, and more than all other nations in combined public and private spending. Colleges and universities directly funded by the federal government, including the U.S. service academies, the Naval Postgraduate School, and military staff colleges, do not charge tuition and are limited to military personnel and government employees. Despite some student loan forgiveness programs being in place, student loan debt increased by 102% between 2010 and 2020, and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization. 
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies, including the National Endowment for the Arts and the National Endowment for the Humanities. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851). 
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In the prior year, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms used include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies that supported around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new, distinct dramatic forms in the Tom shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. The United States also has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances. One is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have been early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect America and give it new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks. 
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, first invented in the 1930s, and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study demonstrated that general proximity to Manhattan's Garment District has been synonymous with American fashion since its inception in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford and Calvin Klein, are headquartered in Manhattan. Labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S. 
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the most commercially successful and most widely seen films in the world. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World foods, especially pumpkin, corn, and potatoes, along with turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. This would become the United States' most prestigious culinary school, where many of the most talented American chefs would study prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020 and directly employed more than 15 million people, representing 10% of the nation's workforce. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine. 
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts, and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All of these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. At the collegiate level, earnings for the member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are some of the most watched national sporting events. In the U.S., intercollegiate sports serve as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country. 
In other international competition, the United States is the home of a number of prestigious events, including the America's Cup, World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States. Its final match was attended by 90,185, setting the world record for largest women's sporting event crowd at the time. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lynn_Margulis] | [TOKENS: 3622] |
Contents Lynn Margulis Lynn Margulis (born Lynn Petra Alexander; March 5, 1938 – November 22, 2011) was an American evolutionary biologist, who was the primary modern proponent for the significance of symbiosis in evolution. In particular, Margulis transformed and fundamentally framed biologists' understanding of the evolution of the Eukaryotes, organisms with nuclei in their cells. She proposed that they came into being by symbiotic mergers of bacteria. Margulis was the co-developer of the Gaia hypothesis with the British chemist James Lovelock, proposing that the Earth functions as a unified self-regulating system, and the principal defender and promulgator of the five kingdom classification of Robert Whittaker. Throughout her career, Margulis' work could arouse intense objections, and her formative paper, "On the Origin of Mitosing Cells", appeared in 1967 after being rejected by about fifteen journals. Still a junior faculty member at Boston University at the time, her theory that cell organelles such as mitochondria and chloroplasts were once independent bacteria was largely ignored for another decade, becoming widely accepted only after it was powerfully substantiated through genetic evidence. Margulis was elected a member of the US National Academy of Sciences in 1983. President Bill Clinton presented her the National Medal of Science in 1999. The Linnean Society of London awarded her the Darwin-Wallace Medal in 2008. Margulis was a strong critic of neo-Darwinism. Her position sparked lifelong debate with leading neo-Darwinian biologists, including Richard Dawkins, George C. Williams, and John Maynard Smith.: 30, 67, 74–78, 88–92 Margulis' work on symbiosis and her endosymbiotic theory had important predecessors, going back to the mid-19th century – notably Andreas Franz Wilhelm Schimper, Konstantin Mereschkowski, Boris Kozo-Polyansky, and Ivan Wallin – and Margulis not only promoted greater recognition for their contributions, but personally oversaw the first English translation of Kozo-Polyansky's Symbiogenesis: A New Principle of Evolution, which appeared the year before her death. Many of her major works, particularly those intended for a general readership, were collaboratively written with her son Dorion Sagan. In 2002, Discover magazine recognized Margulis as one of the 50 most important women in science. Early life and education Lynn Petra Alexander was born on March 5, 1938 in Chicago, to a Jewish family. Her parents were Morris Alexander and Leona Wise Alexander. She was the eldest of four daughters. Her father was an attorney who also ran a company that made road paints. Her mother operated a travel agency. She entered Hyde Park High School in 1952, describing herself as a bad student who frequently had to stand in the corner. A precocious child, she was accepted at the University of Chicago Laboratory Schools at the age of fifteen. In 1957, at age 19, she earned a BA from the University of Chicago in Liberal Arts. She joined the University of Wisconsin to study biology under Hans Ris and Walter Plaut, graduating in 1960 with an MS in genetics and zoology. (Her first publication, published with Plaut in 1958 in the Journal of Protozoology, was on the genetics of Euglena, which are flagellates that have features of both animals and plants.) She then pursued research at the University of California, Berkeley, under the zoologist Max Alfert. 
Before she could complete her dissertation, she was offered a research associateship and then a lectureship at Brandeis University in Massachusetts in 1964. It was while working there that she obtained her PhD from the University of California, Berkeley, in 1965. Her thesis was An Unusual Pattern of Thymidine Incorporation in Euglena. Career In 1966 she moved to Boston University, where she taught biology for twenty-two years. She was initially an Adjunct Assistant Professor, and was appointed Assistant Professor in 1967. She was promoted to Associate Professor in 1971, to full Professor in 1977, and to University Professor in 1986. In 1988 she was appointed Distinguished Professor of Botany at the University of Massachusetts at Amherst. She became Distinguished Professor of Biology in 1993. In 1997 she transferred to the Department of Geosciences at UMass Amherst to become Distinguished Professor of Geosciences "with great delight", a post she held until her death. In 1966, as a young faculty member at Boston University, Margulis wrote a theoretical paper titled "On the Origin of Mitosing Cells". The paper, however, was "rejected by about fifteen scientific journals," she recalled. It was finally accepted by the Journal of Theoretical Biology and is considered today a landmark in modern endosymbiotic theory. Margulis was famous for her tenacity in pushing her theory forward despite the constant criticism and opposition she faced for decades. The descent of mitochondria from bacteria and of chloroplasts from cyanobacteria was experimentally demonstrated in 1978 by Robert Schwartz and Margaret Dayhoff. This formed the first experimental evidence for the symbiogenesis theory. The endosymbiotic theory of organelle genesis became widely accepted in the early 1980s, after the genetic material of mitochondria and chloroplasts had been found to be significantly different from the nuclear DNA of the host cell. In 1995, English evolutionary biologist Richard Dawkins had this to say about Lynn Margulis and her work: "I greatly admire Lynn Margulis's sheer courage and stamina in sticking by the endosymbiosis theory, and carrying it through from being an unorthodoxy to an orthodoxy. I'm referring to the theory that the eukaryotic cell is a symbiotic union of primitive prokaryotic cells. This is one of the great achievements of twentieth-century evolutionary biology, and I greatly admire her for it." Margulis opposed competition-oriented views of evolution, stressing the importance of symbiotic or cooperative relationships between species. She later formulated a theory that proposed symbiotic relationships between organisms of different phyla, or kingdoms, as the driving force of evolution, and explained genetic variation as occurring mainly through transfer of nuclear information between bacterial cells or viruses and eukaryotic cells. Her organelle genesis ideas are widely accepted, but the proposal that symbiotic relationships explain most genetic variation is still something of a fringe idea. Margulis also held a negative view of certain interpretations of Neo-Darwinism that she felt were excessively focused on competition between organisms, believing that history would ultimately judge them as comprising "a minor twentieth-century religious sect within the sprawling religious persuasion of Anglo-Saxon Biology."
She wrote that proponents of the standard theory "wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin – having mistaken him ... Neo-Darwinism, which insists on [the slow accrual of mutations by gene-level natural selection], is in a complete funk." Margulis initially sought out the advice of James Lovelock for her own research: she explained that, "In the early seventies, I was trying to align bacteria by their metabolic pathways. I noticed that all kinds of bacteria produced gases. Oxygen, hydrogen sulfide, carbon dioxide, nitrogen, ammonia—more than thirty different gases are given off by the bacteria whose evolutionary history I was keen to reconstruct. Why did every scientist I asked believe that atmospheric oxygen was a biological product but the other atmospheric gases—nitrogen, methane, sulfur, and so on—were not? 'Go talk to Lovelock,' at least four different scientists suggested. Lovelock believed that the gases in the atmosphere were biological." Margulis met with Lovelock, who explained his Gaia hypothesis to her, and very soon they began an intense collaborative effort on the concept. One of the earliest significant publications on Gaia was a 1974 paper co-authored by Lovelock and Margulis, which succinctly defined the hypothesis as follows: "The notion of the biosphere as an active adaptive control system able to maintain the Earth in homeostasis we are calling the 'Gaia hypothesis.'" Like other early presentations of Lovelock's idea, the Lovelock-Margulis 1974 paper seemed to give living organisms complete agency in creating planetary self-regulation, whereas later, as the idea matured, this planetary-scale self-regulation was recognized as an emergent property of the Earth system, life and its physical environment taken together. When climatologist Stephen Schneider convened the 1989 American Geophysical Union Chapman Conference around the issue of Gaia, the idea of "strong Gaia" and "weak Gaia" was introduced by James Kirchner, after which Margulis was sometimes associated with the idea of "weak Gaia", incorrectly (her essay "Gaia is a Tough Bitch" dates from 1995 – and it stated her own distinction from Lovelock as she saw it, which was primarily that she did not like the metaphor of Earth as a single organism, because, she said, "No organism eats its own waste"). In her 1998 book Symbiotic Planet, Margulis explored the relationship between Gaia and her work on symbiosis. In 1969, life on earth was classified into five kingdoms, as introduced by Robert Whittaker. Margulis became an early supporter as well as critic. While supporting parts, she was the first to recognize the limitations of Whittaker's classification of microbes. But newly discovered organisms such as the archaea and the emergence of molecular taxonomy challenged the concept. By the mid 2000-aughts most scientists began to agree that there are more than five kingdoms. Contrarily, Margulis became the most important defender of the five-kingdom classification. She rejected the three-domain system introduced by Carl Woese in 1990, which gained wide acceptance. She introduced a modified classification by which all life forms, including those newly discovered, could be accounted for in the 'classical' five kingdoms. According to Margulis, the main problem lay with the treatment of archaea, which in her view should be grouped with bacteria under the kingdom Prokaryotae. 
This contrasts with both the three-domain system, which treats archaea as a domain (a higher taxon than kingdom), and the six-kingdom system, which holds that archaea form a separate kingdom. Margulis' concept is given in detail in her book Five Kingdoms, written with Karlene V. Schwartz. It has been suggested that it is mainly because of Margulis that the five-kingdom concept survives. In 2009, via a then-standard publication process known as "communicated submission" (which bypassed traditional peer review), she was instrumental in getting the Proceedings of the National Academy of Sciences (PNAS) to publish a paper by Donald I. Williamson rejecting "the Darwinian assumption that larvae and their adults evolved from a single common ancestor." Williamson's paper provoked an immediate response from the scientific community, including a countering paper in PNAS. Conrad Labandeira of the Smithsonian National Museum of Natural History said, "If I was reviewing [Williamson's paper] I would probably opt to reject it, but I'm not saying it's a bad thing that this is published. What it may do is broaden the discussion on how metamorphosis works and [...] [on] the origin of these very radical life cycles." But Duke University insect developmental biologist Fred Nijhout said that the paper was better suited for the "National Enquirer than the National Academy." In 2009 Margulis and seven others authored a position paper concerning research on the viability of round body forms of some spirochetes, "Syphilis, Lyme disease, & AIDS: Resurgence of 'the great imitator'?", which states that "Detailed research that correlates life histories of symbiotic spirochetes to changes in the immune system of associated vertebrates is sorely needed" and urges the "reinvestigation of the natural history of mammalian, tick-borne, and venereal transmission of spirochetes in relation to impairment of the human immune system". The paper went on to suggest "that the possible direct causal involvement of spirochetes and their round bodies to symptoms of immune deficiency be carefully and vigorously investigated". In a Discover Magazine interview, Margulis explained her reason for interest in the topic of the 2009 "AIDS" paper: "I'm interested in spirochetes only because of our ancestry. I'm not interested in the diseases". She stated that she had called them "symbionts" because both the spirochete which causes syphilis (Treponema) and the spirochete which causes Lyme disease (Borrelia) retain only about 20% of the genes they would need to live freely, outside of their human hosts. However, in the same interview Margulis said that "the set of symptoms, or syndrome, presented by syphilitics overlaps completely with another syndrome: AIDS", and also noted that Kary Mullis said that "he went looking for a reference substantiating that HIV causes AIDS and discovered, 'There is no such document'". This provoked a widespread supposition that Margulis had been an "AIDS denialist". On his Why Evolution is True blog, Jerry Coyne criticized what he interpreted as Margulis' belief "that AIDS is really syphilis, not viral in origin at all." Seth Kalichman, a social psychologist who studies behavioral and social aspects of AIDS, cited her 2009 paper as an example of AIDS denialism "flourishing", and asserted that her "endorsement of HIV/AIDS denialism defies understanding".
Reception Historian Jan Sapp has said that "Lynn Margulis's name is as synonymous with symbiosis as Charles Darwin's is with evolution." She has been called "science's unruly earth mother", a "vindicated heretic", and a scientific "rebel". It has been suggested that the initial rejection of Margulis' work on the endosymbiotic theory, and the controversial nature of both it and Gaia theory, made her identify throughout her career with scientific mavericks, outsiders, and unaccepted theories generally. In the last decade of her life, while key components of her life's work began to be understood as fundamental to a modern scientific viewpoint – the widespread adoption of Earth System Science and the incorporation of key parts of endosymbiotic theory into biology curricula worldwide – Margulis if anything became more embroiled in controversy, not less. Journalist John Wilson explained this by saying that Lynn Margulis "defined herself by oppositional science," and in the commemorative collection of essays Lynn Margulis: The Life and Legacy of a Scientific Rebel, commentators again and again depict her as a modern embodiment of the "scientific rebel", in the tradition of Freeman Dyson's 1995 essay The Scientist as Rebel, a tradition Dyson saw embodied in Benjamin Franklin and believed to be essential to good science. Personal life Margulis married astronomer Carl Sagan in 1957, soon after she received her bachelor's degree. Sagan was then a graduate student in physics at the University of Chicago. Their marriage ended in 1964, just before she completed her PhD. They had two sons: Dorion Sagan, who later became a popular science writer and her collaborator, and Jeremy Sagan, a software developer and founder of Sagan Technology. In 1967 she married Thomas N. Margulis, a crystallographer. They had a son, Zachary Margulis-Ohnuma, a New York City criminal defense lawyer, and a daughter, Jennifer Margulis, a teacher and author. They divorced in 1980. She commented, "I quit my job as a wife twice," and, "it's not humanly possible to be a good wife, a good mother, and a first-class scientist. No one can do it — something has to go." In the 2000s she had a relationship with fellow biologist Ricardo Guerrero. Margulis argued that the September 11 attacks were a "false-flag operation, which has been used to justify the wars in Afghanistan and Iraq as well as unprecedented assaults on [...] civil liberties." She wrote that there was "overwhelming evidence that the three buildings [of the World Trade Center] collapsed by controlled demolition." She was a religious agnostic and a staunch evolutionist, but rejected the modern evolutionary synthesis, saying: "I remember waking up one day with an epiphanous revelation: I am not a neo-Darwinist! I recalled an earlier experience, when I realized that I wasn't a humanistic Jew. Although I greatly admire Darwin's contributions and agree with most of his theoretical analysis and I am a Darwinist, I am not a neo-Darwinist." She argued that "Natural selection eliminates and maybe maintains, but it doesn't create", and maintained that symbiosis was the major driver of evolutionary change. Margulis died on November 22, 2011, at home in Amherst, Massachusetts, five days after suffering a hemorrhagic stroke. In accordance with her wishes, she was cremated and her ashes were scattered in her favorite research areas near her home.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-GrimesVanityFair2022-467] | [TOKENS: 10515] |
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026,[update] Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated in 1989 to Canada; he has Canadian citizenship since his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and their leadership in the AI boom in the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes, and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package worth $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published between 2025–26 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth. 
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023, Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol Musk had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies" where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten severely, leading to him being hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books, and had attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School. 
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H1-B. According to numerous former business associates and shareholders, Musk said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. Replying to Rolling Stone, Musk denounced the notion that they started their company with funds borrowed from Errol Musk, but in a tweet, he recognized that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with online bank Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to resulting technological issues and lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320,000,000 in 2025). 
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth-chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune, (equivalent to $180,000,000 in 2025) Musk founded SpaceX in May 2002 and became the company's CEO and Chief Engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, it was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform. Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, the Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025[update], over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement. 
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm.[page needed] Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several well-selling electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In May 2020, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over him tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials. 
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials—which have caused the deaths of some monkeys—have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink. In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and had questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder. Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter. By the end of April, Musk had successfully concluded his bid for approximately $44 billion, which included approximately $12.5 billion in loans and $21 billion in equity financing. Having backtracked on his initial decision, Musk bought the company on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives, including CEO Parag Agrawal, and became CEO himself. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk loosened content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. Musk also promised to step down as CEO after a Twitter poll, and five months later he did so, transitioning to the roles of executive chairman and chief technology officer (CTO). Even after Musk stepped down as CEO, X has continued to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks, which hinders visibility and is considered a form of shadow banning, or by suspending their accounts without justification.
Other activities In August 2013, Musk announced plans for a version of a vactrain and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories regarding the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian, and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 special election for Texas's 34th congressional district. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign and hosting DeSantis's campaign announcement on a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it was a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021 and featuring executives from General Motors, Ford, and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that the omission had been a nod to the United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and speculated that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he criticized the Biden White House as "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen, and other capitalists actually flourished under Biden, but that its leaders chose Trump because of their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally and promoted conspiracy theories and falsehoods about Democrats, election fraud, and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring; the organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain, and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found he had boosted far-right political movements to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or a fascist Roman salute. He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided along partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, with Musk accepting the offer. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role is not clear. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE. A federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration has attracted public backlash, particularly in response to DOGE. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He has prioritized secrecy within the organization and has accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by this time, most of them children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding from the Trump administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025.
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump, its most notable event being Musk alleging on X (formerly Twitter) on June 5, 2025, that Trump had ties to sex offender Jeffrey Epstein, posting that "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public." Trump responded on Truth Social, stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars. He has repeatedly pushed for humanity to colonize Mars in order to become an interplanetary species and lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While Musk describes himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and was described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024.
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was subsequently attacked as being responsible for spreading misinformation and amplifying the far-right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down for three years as Tesla chairman but was able to remain as CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla stock price ("too high imo") violated the agreement. Freedom of Information Act (FOIA)-released records showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome in a TED2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ... 
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs, and that if drugs somehow improved his productivity, "I would definitely take them!". The New York Times' investigations revealed Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates who have become troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he stated has a "'restoring effect' that helps his 'mental calibration'". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. Musk has justified the boosting by claiming that all top accounts do it so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and specified the inclusion of the historical figure Yasuke in the Assassin's Creed game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that they were focused on producing a game not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest that the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child Nevada Musk died of sudden infant death syndrome at the age of 10 weeks. After his death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year. 
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations as it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the pregnancy, Musk confirmed reports that the couple were "semi-separated" in September 2021; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Elon Musk has taken X Æ A-Xii to multiple official events in Washington, D.C. during Trump's second term in office. Also in July 2022, The Wall Street Journal reported that Musk allegedly had an affair with Nicole Shanahan, the wife of Google co-founder Sergey Brin, in 2021, leading to their divorce the following year. Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported Musk bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St Clair had filed for sole custody of her five-month-old son and for Musk to be recognised as the child's father. On March 31, 2025, Musk wrote that, while he was unsure if he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from the Wall Street Journal indicated that $1 million of these payments to St. Clair were structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas day in 2012, Musk emailed Epstein asking "Do you have any parties planned? 
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans to come to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house, to which Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; by November 2020, around 75% of his wealth was derived from Tesla stock, although he described himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021; $400 billion in December 2024; $500 billion in October 2025; $600 billion in mid-December 2025; $700 billion later that month; and $800 billion in February 2026. In November 2025, a Tesla pay package worth potentially $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their separate industries starting in the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions, while also often making controversial statements, in contrast to other billionaires who prefer reclusiveness to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn condemnation for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like British survivors of grooming gangs. 
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president" or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Then Time editor-in-chief Edward Felsenthal wrote that, "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too." Notes References Works cited Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Cloud_storage] | [TOKENS: 1053] |
Contents Cloud storage Cloud storage is a model of computer data storage in which data, said to be on "the cloud", is stored remotely in logical pools and is accessible to users over a network, typically the Internet. The physical storage spans multiple servers (sometimes in multiple locations), and the physical environment is typically owned and managed by a cloud computing provider. These cloud storage providers are responsible for keeping the data available and accessible, and the physical environment secured, protected, and running. People and organizations buy or lease storage capacity from the providers to store user, organization, or application data. Cloud storage services may be accessed through a colocated cloud computing service, a web service application programming interface (API), or by applications that use the API, such as cloud desktop storage, a cloud storage gateway or Web-based content management systems. History Cloud computing is believed to have been invented by J. C. R. Licklider in the 1960s with his work on ARPANET to connect people and data from anywhere at any time. In 1983, CompuServe offered its consumer users a small amount of disk space that could be used to store any files they chose to upload. In 1994, AT&T launched PersonaLink Services, an online platform for personal and business communication and entrepreneurship. The storage was one of the first to be entirely web-based, and was referenced in the company's commercials with the line "you can think of our electronic meeting place as the cloud." Amazon Web Services introduced its cloud storage service Amazon S3 in 2006; it has gained widespread recognition and adoption as the storage supplier to popular services such as SmugMug, Dropbox, and Pinterest. In 2005, Box announced an online file sharing and personal cloud content management service for businesses. Architecture Cloud storage is based on highly virtualized infrastructure and is like broader cloud computing in terms of interfaces, near-instant elasticity and scalability, multi-tenancy, and metered resources. Cloud storage services can be used from an off-premises service (Amazon S3) or deployed on-premises (ViON Capacity Services). There are three types of cloud storage: object storage, file storage, and block storage. Each of these cloud storage types offers its own advantages. Advantages Potential concerns Security of stored data and data in transit may be a concern when storing sensitive data at a cloud storage provider. Outsourcing data storage increases the attack surface area. Cloud storage is a rich resource for both hackers and national security agencies. Because the cloud holds data from many different users and organizations, hackers see it as a very valuable target. There are several options available to avoid security issues. One option is to use a private cloud instead of a public cloud. Another option is to ingest data in an encrypted format where the key is held within the on-premises infrastructure. To this end, access is often by way of on-premises cloud storage gateways that have options to encrypt the data prior to transfer. Typically, cloud storage Service Level Agreements (SLAs) do not encompass all forms of service interruptions. Exclusions typically include planned maintenance, downtime resulting from external factors such as network issues, human errors like misconfigurations, natural disasters, force majeure events, or security breaches. 
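One way to picture the encryption option described above, where the key never leaves the customer's own infrastructure, is a short client-side sketch: data is encrypted locally before upload, so the provider only ever stores ciphertext. The snippet below is a minimal illustration in Python, assuming an S3-compatible object store reachable through boto3 and the "cryptography" package for symmetric encryption; the bucket name, object key and Fernet key handling are illustrative placeholders, not any particular gateway product's API.

    # Minimal sketch of client-side encryption before upload (illustrative only).
    import boto3
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in practice, load this from an on-premises key store
    fernet = Fernet(key)
    s3 = boto3.client("s3")

    plaintext = b"sensitive records"
    ciphertext = fernet.encrypt(plaintext)          # the provider never sees the plaintext
    s3.put_object(Bucket="example-bucket", Key="records/2024.bin", Body=ciphertext)

    # Retrieval: only a holder of the on-premises key can decrypt the object.
    blob = s3.get_object(Bucket="example-bucket", Key="records/2024.bin")["Body"].read()
    assert fernet.decrypt(blob) == plaintext

Because the key stays on-premises, a compromise of the storage provider alone exposes only ciphertext, which is the property the gateway-based approach above relies on.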
Typically, customers bear the responsibility of monitoring SLA compliance and must file claims for any unmet SLAs within a designated timeframe. Customers should be aware of how deviations from SLAs are calculated, as these parameters may vary by other services offered within the same provider. These requirements can place a considerable burden on customers. Additionally, SLA percentages and conditions can differ across various services within the same provider, with some services lacking any SLA altogether. In cases of service interruptions due to hardware failures in the cloud provider, service providers typically do not offer monetary compensation. Instead, eligible users may receive credits as outlined in the corresponding SLA. Hybrid cloud storage Hybrid cloud storage is a term for a storage infrastructure that uses a combination of on-premises storage resources with cloud storage. The on-premises storage is usually managed by the organization, while the public cloud storage provider is responsible for the management and security of the data stored in the cloud. Hybrid cloud storage can be implemented by an on-premises cloud storage gateway that presents a file system or object storage interface that users can access in the same way they would access a local storage system. The cloud storage gateway transparently transfers the data to and from the cloud storage service, providing low latency access to the data through a local cache. Hybrid cloud storage can be used to supplement an organization's internal storage resources, or it can be used as the primary storage infrastructure. In either case, hybrid cloud storage can provide organizations with greater flexibility and scalability than traditional on-premises storage infrastructure. There are several benefits to using hybrid cloud storage, including the ability to cache frequently used data on-site for quick access, while inactive cold data is stored off-site in the cloud. This can save space, reduce storage costs and improve performance. Additionally, hybrid cloud storage can provide organizations with greater redundancy and fault tolerance, as data is stored in both on-premises and cloud storage infrastructure. See also References |
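To make the hybrid cloud storage gateway described above more concrete, the following is a minimal sketch rather than any vendor's actual implementation: a local directory acts as the low-latency cache for hot data, writes go through to the cloud copy, and reads fall back to the object store on a cache miss. The bucket name, cache path and class name are assumptions made for illustration.

    # Minimal sketch of a hybrid cloud storage gateway's cache logic (illustrative only).
    from pathlib import Path
    import boto3

    class HybridGateway:
        def __init__(self, bucket, cache_dir="/var/cache/cloud-gw"):
            self.bucket = bucket
            self.cache = Path(cache_dir)
            self.cache.mkdir(parents=True, exist_ok=True)
            self.s3 = boto3.client("s3")

        def put(self, key: str, data: bytes) -> None:
            # Write-through: keep the local copy and the cloud copy in step.
            local = self.cache / key
            local.parent.mkdir(parents=True, exist_ok=True)
            local.write_bytes(data)
            self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)

        def get(self, key: str) -> bytes:
            local = self.cache / key
            if local.exists():                      # hot data: low-latency local read
                return local.read_bytes()
            obj = self.s3.get_object(Bucket=self.bucket, Key=key)   # cold data: fetch from the cloud
            data = obj["Body"].read()
            local.parent.mkdir(parents=True, exist_ok=True)
            local.write_bytes(data)                 # populate the cache for later reads
            return data

A real gateway would also evict cold entries and handle concurrent writers, but the cache-then-cloud read path is the core of the latency benefit described above.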
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Middle_East#cite_ref-72] | [TOKENS: 6152] |
Contents Middle East The Middle East[b] is a geopolitical region encompassing the Arabian Peninsula, Egypt, Iran, Iraq, the Levant, and Turkey. The term came into widespread usage by Western European nations in the early 20th century as a replacement of the term Near East (both were in contrast to the Far East). The term "Middle East" has led to some confusion over its changing definitions. Since the late 20th century, it has been criticized as being too Eurocentric. The region includes the vast majority of the territories included in the closely associated definition of West Asia, but without the South Caucasus. It also includes all of Egypt (not just the Sinai region) and all of Turkey (including East Thrace). Most Middle Eastern countries (13 out of 18) are part of the Arab world. The three most populous countries in the region are Egypt, Iran, and Turkey, while Saudi Arabia is the largest Middle Eastern country by area. The history of the Middle East dates back to ancient times, and it was long considered the "cradle of civilization". The geopolitical importance of the region has been recognized and competed for during millennia. The Abrahamic religions (Judaism, Christianity, and Islam) have their origins in the Middle East. Arabs constitute the main ethnic group in the region, followed by Turks, Persians, Kurds, Jews, and Assyrians. The Middle East generally has a hot, arid climate, especially in the Arabian and Egyptian regions. Several major rivers provide irrigation to support agriculture in limited areas here, such as the Nile Delta in Egypt, the Tigris and Euphrates watersheds of Mesopotamia, and the basin of the Jordan River that spans most of the Levant. These regions are collectively known as the Fertile Crescent, and comprise the core of what historians had long referred to as the cradle of civilization; multiple regions of the world have since been classified as also having developed independent, original civilizations. Conversely, the Levantine coast and most of Turkey have relatively temperate climates typical of the Mediterranean, with dry summers and cool, wet winters. Most of the countries that border the Persian Gulf have vast reserves of petroleum. Monarchs of the Arabian Peninsula in particular have benefitted economically from petroleum exports. Because of the arid climate and dependence on the fossil fuel industry, the Middle East is both a major contributor to climate change and a region that is expected to be severely adversely affected by it. Other concepts of the region exist, including the broader Middle East and North Africa (MENA), which includes states of the Maghreb and the Sudan. The term the "Greater Middle East" also includes Afghanistan, Mauritania, Pakistan, as well as parts of East Africa, and sometimes Central Asia and the South Caucasus. Terminology The term "Middle East" may have originated in the 1850s in the British India Office. However, it became more widely known when United States naval strategist Alfred Thayer Mahan used the term in 1902 to "designate the area between Arabia and India". During this time the British and Russian empires were vying for influence in Central Asia, a rivalry that would become known as the Great Game. Mahan realized not only the strategic importance of the region, but also of its center, the Persian Gulf. He labeled the area surrounding the Persian Gulf as the Middle East. 
He said that, beyond Egypt's Suez Canal, the Gulf was the most important passage for Britain to control in order to keep the Russians from advancing towards British India. Mahan first used the term in his article "The Persian Gulf and International Relations", published in September 1902 in the National Review, a British journal. The Middle East, if I may adopt a term which I have not seen, will some day need its Malta, as well as its Gibraltar; it does not follow that either will be in the Persian Gulf. Naval force has the quality of mobility which carries with it the privilege of temporary absences; but it needs to find on every scene of operation established bases of refit, of supply, and in case of disaster, of security. The British Navy should have the facility to concentrate in force if occasion arise, about Aden, India, and the Persian Gulf. Mahan's article was reprinted in The Times and followed in October by a 20-article series entitled "The Middle Eastern Question", written by Sir Ignatius Valentine Chirol. During this series, Sir Ignatius expanded the definition of Middle East to include "those regions of Asia which extend to the borders of India or command the approaches to India." After the series ended in 1903, The Times removed quotation marks from subsequent uses of the term. Until World War II, it was customary to refer to areas centered on Turkey and the eastern shore of the Mediterranean as the "Near East", while the "Far East" centered on China, India and Japan. The Middle East was then defined as the area from Mesopotamia to Burma; namely, the area between the Near East and the Far East. This area broadly corresponds to South Asia. In the late 1930s, the British established the Middle East Command, which was based in Cairo, for its military forces in the region. After that time, the term "Middle East" gained broader usage in Europe and the United States. Following World War II, for example, the Middle East Institute was founded in Washington, D.C. in 1946. The corresponding adjective is Middle Eastern and the derived noun is Middle Easterner. While non-Eurocentric terms such as "Southwest Asia" or "Swasia" have been sparsely used, the classification of the African country, Egypt, among those counted in the Middle East challenges the usefulness of using such terms. The description Middle has also led to some confusion over changing definitions. Before the First World War, "Near East" was used in English to refer to the Balkans and the Ottoman Empire, while "Middle East" referred to the Caucasus, Persia, and Arabian lands, and sometimes Afghanistan, India and others. In contrast, "Far East" referred to the countries of East Asia (e.g. China, Japan, and Korea). With the collapse of the Ottoman Empire in 1918, "Near East" largely fell out of common use in English, while "Middle East" came to be applied to the emerging independent countries of the Islamic world. However, the usage "Near East" was retained by a variety of academic disciplines, including archaeology and ancient history. In their usage, the term describes an area identical to the term Middle East, which is not used by these disciplines (see ancient Near East).[citation needed] The first official use of the term "Middle East" by the United States government was in the 1957 Eisenhower Doctrine, which pertained to the Suez Crisis. 
Secretary of State John Foster Dulles defined the Middle East as "the area lying between and including Libya on the west and Pakistan on the east, Syria and Iraq on the North and the Arabian peninsula to the south, plus the Sudan and Ethiopia." In 1958, the State Department explained that the terms "Near East" and "Middle East" were interchangeable, and defined the region as including only Egypt, Syria, Israel, Lebanon, Jordan, Iraq, Saudi Arabia, Kuwait, Bahrain, and Qatar. Since the late 20th century, scholars and journalists from the region, such as journalist Louay Khraish and historian Hassan Hanafi have criticized the use of "Middle East" as a Eurocentric and colonialist term. The Associated Press Stylebook of 2004 says that Near East formerly referred to the farther west countries while Middle East referred to the eastern ones, but that now they are synonymous. It instructs: Use Middle East unless Near East is used by a source in a story. Mideast is also acceptable, but Middle East is preferred. European languages have adopted terms similar to Near East and Middle East. Since these are based on a relative description, the meanings depend on the country and are generally different from the English terms. In German the term Naher Osten (Near East) is still in common use (nowadays the term Mittlerer Osten is more and more common in press texts translated from English sources, albeit having a distinct meaning). In the four Slavic languages, Russian Ближний Восток or Blizhniy Vostok, Bulgarian Близкия Изток, Polish Bliski Wschód or Croatian Bliski istok (terms meaning Near East are the only appropriate ones for the region). However, some European languages do have "Middle East" equivalents, such as French Moyen-Orient, Swedish Mellanöstern, Spanish Oriente Medio or Medio Oriente, Greek is Μέση Ανατολή (Mesi Anatoli), and Italian Medio Oriente.[c] Perhaps because of the political influence of the United States and Europe, and the prominence of Western press, the Arabic equivalent of Middle East (Arabic: الشرق الأوسط ash-Sharq al-Awsaṭ) has become standard usage in the mainstream Arabic press. It comprises the same meaning as the term "Middle East" in North American and Western European usage. The designation, Mashriq, also from the Arabic root for East, also denotes a variously defined region around the Levant, the eastern part of the Arabic-speaking world (as opposed to the Maghreb, the western part). Even though the term originated in the West, countries of the Middle East that use languages other than Arabic also use that term in translation. For instance, the Persian equivalent for Middle East is خاورمیانه (Khāvar-e miyāneh), the Hebrew is המזרח התיכון (hamizrach hatikhon), and the Turkish is Orta Doğu. Countries and territory Traditionally included within the Middle East are Arabia, Asia Minor, East Thrace, Egypt, Iran, the Levant, Mesopotamia, and the Socotra Archipelago. The region includes 17 UN-recognized countries and one British Overseas Territory. Various concepts are often paralleled to the Middle East, most notably the Near East, Fertile Crescent, and Levant. These are geographical concepts, which refer to large sections of the modern-day Middle East, with the Near East being the closest to the Middle East in its geographical meaning. Due to it primarily being Arabic speaking, the Maghreb region of North Africa is sometimes included. 
"Greater Middle East" is a political term coined by the second Bush administration in the first decade of the 21st century to denote various countries, pertaining to the Muslim world, specifically Afghanistan, Iran, Pakistan, and Turkey. Various Central Asian countries are sometimes also included. History The Middle East lies at the juncture of Africa and Eurasia and of the Indian Ocean and the Mediterranean Sea (see also: Indo-Mediterranean). It is the birthplace and spiritual center of religions such as Christianity, Islam, Judaism, Manichaeism, Yezidi, Druze, Yarsan, and Mandeanism, and in Iran, Mithraism, Zoroastrianism, Manicheanism, and the Baháʼí Faith. Throughout its history the Middle East has been a major center of world affairs; a strategically, economically, politically, culturally, and religiously sensitive area. The region is one of the regions where agriculture was independently discovered, and from the Middle East it was spread, during the Neolithic, to different regions of the world such as Europe, the Indus Valley and Eastern Africa. Prior to the formation of civilizations, advanced cultures formed all over the Middle East during the Stone Age. The search for agricultural lands by agriculturalists, and pastoral lands by herdsmen meant different migrations took place within the region and shaped its ethnic and demographic makeup. The Middle East is widely and most famously known as the cradle of civilization. The world's earliest civilizations, Mesopotamia (Sumer, Akkad, Assyria and Babylonia), ancient Egypt and Kish in the Levant, all originated in the Fertile Crescent and Nile Valley regions of the ancient Near East. These were followed by the Hittite, Greek, Hurrian and Urartian civilisations of Asia Minor; Elam, Persia and Median civilizations in Iran, as well as the civilizations of the Levant (such as Ebla, Mari, Nagar, Ugarit, Canaan, Aramea, Mitanni, Phoenicia and Israel) and the Arabian Peninsula (Magan, Sheba, Ubar). The Near East was first largely unified under the Neo Assyrian Empire, then the Achaemenid Empire followed later by the Macedonian Empire and after this to some degree by the Iranian empires (namely the Parthian and Sassanid Empires), the Roman Empire and Byzantine Empire. The region served as the intellectual and economic center of the Roman Empire and played an exceptionally important role due to its periphery on the Sassanid Empire. Thus, the Romans stationed up to five or six of their legions in the region for the sole purpose of defending it from Sassanid and Bedouin raids and invasions. From the 4th century CE onwards, the Middle East became the center of the two main powers at the time, the Byzantine Empire and the Sassanid Empire. However, it would be the later Islamic Caliphates of the Middle Ages, or Islamic Golden Age which began with the Islamic conquest of the region in the 7th century AD, that would first unify the entire Middle East as a distinct region and create the dominant Islamic Arab ethnic identity that largely (but not exclusively) persists today. The 4 caliphates that dominated the Middle East for more than 600 years were the Rashidun Caliphate, the Umayyad caliphate, the Abbasid caliphate and the Fatimid caliphate. Additionally, the Mongols would come to dominate the region, the Kingdom of Armenia would incorporate parts of the region to their domain, the Seljuks would rule the region and spread Turko-Persian culture, and the Franks would found the Crusader states that would stand for roughly two centuries. 
Josiah Russell estimates the population of what he calls "Islamic territory" as roughly 12.5 million in 1000 – Anatolia 8 million, Syria 2 million, and Egypt 1.5 million. From the 16th century onward, the Middle East came to be dominated, once again, by two main powers: the Ottoman Empire and the Safavid dynasty. The modern Middle East began after World War I, when the Ottoman Empire, which was allied with the Central Powers, was defeated by the Allies and partitioned into a number of separate nations, initially under British and French Mandates. Other defining events in this transformation included the establishment of Israel in 1948 and the eventual departure of European powers, notably Britain and France by the end of the 1960s. They were supplanted in some part by the rising influence of the United States from the 1970s onwards. In the 20th century, the region's significant stocks of crude oil gave it new strategic and economic importance. Mass production of oil began around 1945, with Saudi Arabia, Iran, Kuwait, Iraq, and the United Arab Emirates having large quantities of oil. Estimated oil reserves, especially in Saudi Arabia and Iran, are some of the highest in the world, and the international oil cartel OPEC is dominated by Middle Eastern countries. During the Cold War, the Middle East was a theater of ideological struggle between the two superpowers and their allies: NATO and the United States on one side, and the Soviet Union and Warsaw Pact on the other, as they competed to influence regional allies. Besides the political reasons there was also the "ideological conflict" between the two systems. Moreover, as Louise Fawcett argues, among many important areas of contention, or perhaps more accurately of anxiety, were, first, the desires of the superpowers to gain strategic advantage in the region, second, the fact that the region contained some two-thirds of the world's oil reserves in a context where oil was becoming increasingly vital to the economy of the Western world [...] Within this contextual framework, the United States sought to divert the Arab world from Soviet influence. Throughout the 20th and 21st centuries, the region has experienced both periods of relative peace and tolerance and periods of conflict particularly between Sunnis and Shiites. Geography In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas emissions (GHG) despite making up only 6% of the global population. These emissions are mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the extensive oil and natural gas reserves that are found within the region. The Middle East region is one of the most vulnerable to climate change. The impacts include increase in drought conditions, aridity, heatwaves and sea level rise. Sharp global temperature and sea level changes, shifting precipitation patterns and increased frequency of extreme weather events are some of the main impacts of climate change as identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts due to its arid and semi-arid environment, facing climatic challenges such as low rainfall, high temperatures and dry soil. The climatic conditions that foster such challenges for MENA are projected by the IPCC to worsen throughout the 21st century. 
If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100. Climate change is expected to put significant strain on already scarce water and agricultural resources within the MENA region, threatening the national security and political stability of all included countries. Over 60 percent of the region's population lives in high and very high water-stressed areas, compared to the global average of 35 percent. This has prompted some MENA countries to engage with the issue of climate change on an international level through environmental accords such as the Paris Agreement. Law and policy are also being established on a national level amongst MENA countries, with a focus on the development of renewable energies. Economy Middle Eastern economies range from very poor (such as Gaza and Yemen) to extremely wealthy (such as Qatar and the UAE). According to the International Monetary Fund, the three largest Middle Eastern economies in nominal GDP in 2023 were Saudi Arabia ($1.06 trillion), Turkey ($1.03 trillion), and Israel ($0.54 trillion). For nominal GDP per person, the highest-ranking countries are Qatar ($83,891), Israel ($55,535), the United Arab Emirates ($49,451) and Cyprus ($33,807). Turkey ($3.6 trillion), Saudi Arabia ($2.3 trillion), and Iran ($1.7 trillion) had the largest economies in terms of GDP PPP. For GDP PPP per person, the highest-ranking countries are Qatar ($124,834), the United Arab Emirates ($88,221), Saudi Arabia ($64,836), Bahrain ($60,596) and Israel ($54,997). The lowest-ranking country in the Middle East, in terms of nominal GDP per capita, is Yemen ($573). The economic structures of Middle Eastern nations differ: while some are heavily dependent on the export of oil and oil-related products (Saudi Arabia, the UAE and Kuwait), others have a highly diverse economic base (such as Cyprus, Israel, Turkey and Egypt). Industries of the Middle Eastern region include oil and oil-related products, agriculture, cotton, cattle, dairy, textiles, leather products, surgical instruments, and defence equipment (guns, ammunition, tanks, submarines, fighter jets, UAVs, and missiles). Banking is an important sector, especially for the UAE and Bahrain. With the exception of Cyprus, Turkey, Egypt, Lebanon and Israel, tourism has been a relatively undeveloped area of the economy, in part because of the socially conservative nature of the region as well as political turmoil in certain regions. Since the end of the COVID pandemic, however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists because of improving tourist facilities and the relaxing of tourism-related restrictive policies. Unemployment is high in the Middle East and North Africa region, particularly among people aged 15–29, a demographic representing 30% of the region's population. The total regional unemployment rate in 2025 is 10.8%, and among youth it is as high as 28%. Demographics Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic peoples (Turkish, Azeris, Syrian Turkmen, and Iraqi Turkmen). Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Greek Cypriots, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas. 
European ethnic groups that form a diaspora in the region include Albanians, Bosniaks, Circassians (including Kabardians), Crimean Tatars, Greeks, Franco-Levantines, Italo-Levantines, and Iraqi Turkmens. Among other migrant populations are Chinese, Filipinos, Indians, Indonesians, Pakistanis, Pashtuns, Romani, and Afro-Arabs. "Migration has always provided an important vent for labor market pressures in the Middle East. For the period between the 1970s and 1990s, the Arab states of the Persian Gulf in particular provided a rich source of employment for workers from Egypt, Yemen and the countries of the Levant, while Europe had attracted young workers from North African countries due both to proximity and the legacy of colonial ties between France and the majority of North African states." According to the International Organization for Migration, there are 13 million first-generation migrants from Arab nations in the world, of which 5.8 million reside in other Arab countries. Expatriates from Arab countries contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009, Arab countries received a total of US$35.1 billion in remittance inflows, and remittances sent to Jordan, Egypt and Lebanon from other Arab countries are 40 to 190 per cent higher than trade revenues between these and other Arab countries. In Somalia, the Somali Civil War has greatly increased the size of the Somali diaspora, as many of the best-educated Somalis left for Middle Eastern countries as well as Europe and North America. Non-Arab Middle Eastern countries such as Turkey, Israel and Iran are also subject to important migration dynamics. A fair proportion of those migrating from Arab nations are from ethnic and religious minorities facing persecution and are not necessarily ethnic Arabs, Iranians or Turks.[citation needed] Large numbers of Kurds, Jews, Assyrians, Greeks and Armenians, as well as many Mandaeans, have left nations such as Iraq, Iran, Syria and Turkey for these reasons during the last century. In Iran, many religious minorities such as Christians, Baháʼís, Jews and Zoroastrians have left since the Islamic Revolution of 1979. The Middle East is very diverse when it comes to religions, many of which originated there. Islam is the largest religion in the Middle East, but other faiths that originated there, such as Judaism and Christianity, are also well represented. Christian communities have played a vital role in the Middle East, and they represent 78% of the population of Cyprus and 40.5% of that of Lebanon, where the Lebanese president, half of the cabinet, and half of the parliament follow one of the various Lebanese Christian rites. There are also important minority religions like the Baháʼí Faith, Yarsanism, Yazidism, Zoroastrianism, Mandaeism, Druze, and Shabakism, and in ancient times the region was home to Mesopotamian religions, Canaanite religions, Manichaeism, Mithraism and various monotheist gnostic sects. The six top languages, in terms of numbers of speakers, are Arabic, Persian, Turkish, Kurdish, Modern Hebrew and Greek. About 20 minority languages are also spoken in the Middle East. Arabic, with all its dialects, is the most widely spoken language in the Middle East, with Literary Arabic being official in all North African and most West Asian countries. Arabic dialects are also spoken in some adjacent areas in neighbouring Middle Eastern non-Arab countries. It is a member of the Semitic branch of the Afro-Asiatic languages. 
Several Modern South Arabian languages such as Mehri and Soqotri are also spoken in Yemen and Oman. Another Semitic language is Aramaic, whose dialects are spoken mainly by Assyrians and Mandaeans, with Western Aramaic still spoken in two villages near Damascus, Syria. There is also an Oasis Berber-speaking community in Egypt, where the language is also known as Siwa; it is a non-Semitic Afro-Asiatic sister language. Persian is the second most spoken language. While it is primarily spoken in Iran and some border areas in neighbouring countries, the country is one of the region's largest and most populous. It belongs to the Indo-Iranian branch of the family of Indo-European languages. Other Western Iranic languages spoken in the region include Achomi, Daylami, Kurdish dialects, Semnani and Lurish, amongst many others. The close third-most widely spoken language, Turkish, is largely confined to Turkey, which is also one of the region's largest and most populous countries, but it is present in areas in neighboring countries. It is a member of the Turkic languages, which have their origins in East Asia. Another Turkic language, Azerbaijani, is spoken by Azerbaijanis in Iran. The fourth-most widely spoken language, Kurdish, is spoken in Iran, Iraq, Syria and Turkey; Sorani Kurdish is the second official language in Iraq (instated after the 2005 constitution), after Arabic. Hebrew is the official language of Israel, with Arabic given a special status after the 2018 Basic Law lowered it from its prior status as an official language. Hebrew is spoken and used by over 80% of Israel's population, with the other 20% using Arabic. Modern Hebrew only began being spoken in the 20th century, after being revived in the late 19th century by Eliezer Ben-Yehuda (Eliezer Perlman) and European Jewish settlers, with the first native Hebrew speaker being born in 1882. Greek is one of the two official languages of Cyprus, and the country's main language. Small communities of Greek speakers exist all around the Middle East; until the 20th century it was also widely spoken in Asia Minor (being the second most spoken language there, after Turkish) and Egypt. During antiquity, Ancient Greek was the lingua franca for many areas of the western Middle East, and until the Muslim expansion it was widely spoken there as well. Until the late 11th century, it was also the main spoken language in Asia Minor; after that it was gradually replaced by the Turkish language as the Anatolian Turks expanded and the local Greeks were assimilated, especially in the interior. English is one of the official languages of Akrotiri and Dhekelia. It is also commonly taught and used as a foreign second language in countries such as Egypt, Jordan, Iran, Iraq, Qatar, Bahrain, the United Arab Emirates and Kuwait, and is a main language in some emirates of the United Arab Emirates. It is also spoken as a native language by Jewish immigrants from Anglophone countries (the UK, the US, and Australia) in Israel and is widely understood as a second language there. French is taught and used in many government facilities and media in Lebanon, and is taught in some primary and secondary schools of Egypt and Syria; owing to widespread immigration of French Jews to Israel, it is also the native language of approximately 200,000 Jews in Israel. Maltese, a Semitic language mainly spoken in Europe, is used by the Franco-Maltese diaspora in Egypt. Armenian speakers are also to be found in the region. Georgian is spoken by the Georgian diaspora. 
Russian is spoken by a large portion of the Israeli population, because of immigration from the former Soviet Union in the 1990s. Russian today is a popular unofficial language in use in Israel; news outlets, radio stations and signboards in Russian can be found around the country, after Hebrew and Arabic. Circassian is also spoken by the diaspora in the region and by almost all Circassians in Israel, who speak Hebrew and English as well. The largest Romanian-speaking community in the Middle East is found in Israel, where, as of 1995, Romanian was spoken by 5% of the population.[d] Bengali, Hindi and Urdu are widely spoken by migrant communities in many Middle Eastern countries, such as Saudi Arabia (where 20–25% of the population is South Asian), the United Arab Emirates (where 50–55% of the population is South Asian), and Qatar, which have large numbers of Pakistani, Bangladeshi and Indian immigrants. Culture The Middle East has recently become more prominent in hosting global sport events due to its wealth and desire to diversify its economy. The South Asian diaspora is a major backer of cricket in the region. See also Notes References Further reading External links 29°N 41°E |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/File:BH-no-escape-2.svg] | [TOKENS: 216] |
File:BH-no-escape-2.svg Summary This is part two of a series of figures designed to help explain how a black hole distorts the causal structure of the spacetime around it. The figures are essentially cartoonized versions of Finkelstein diagrams (see: Hawking, Stephen W.; George Ellis (1973) The Large Scale Structure of Space-Time, Cambridge: Cambridge University Press, p. 152, ISBN 0-521-09906-4, OCLC 16002677). The diagrams have been stripped of their technical details so as to emphasize the essence of the argument. The other parts are: and Licensing |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Parliamentary_system] | [TOKENS: 3554] |
Contents Parliamentary system A parliamentary system, or parliamentary democracy, is a form of government based on the fusion of powers. In this system the head of government (chief executive) derives their democratic legitimacy from their ability to command the support ("confidence") of a majority of the parliament, to which they are held accountable. This head of government is usually, but not always, distinct from a ceremonial head of state. This is in contrast to a presidential or assembly-independent system, which features a president who is not fully accountable to the legislature, and cannot be replaced by a simple majority vote. Countries with parliamentary systems may be constitutional monarchies, where a monarch is the head of state while the head of government is almost always a member of parliament, or parliamentary republics, where a mostly ceremonial president is the head of state while the head of government is from the legislature. In a few countries, the head of government is also head of state but is elected by the legislature. In bicameral parliaments, the head of government is generally, though not always, a member of the lower house. Typically, the head of state in a parliamentary republic is elected by popular vote; however, in some cases they are elected by an electoral college (e.g. Germany) or by members of parliament (e.g. Italy) in a special process. Parliamentary democracy is the predominant form of government in the European Union, Oceania, and throughout the former British Empire, with other users scattered throughout Africa and Asia. A similar system, called a council–manager government, is used by many local governments in the United States. History The first parliaments date back to Europe in the Middle Ages. The earliest example of a parliament is disputed, especially depending on how the term is defined. For example, the Icelandic Althing, consisting of prominent individuals among the free landowners of the various districts of the Icelandic Commonwealth, first gathered around the year 930 (it conducted its business orally, with no written record allowing an exact date). The first written record of a parliament, in particular in the sense of an assembly separate from the population called in the presence of a king, was in 1188, when Alfonso IX, King of León (Spain), convened the three states in the Cortes of León. The Corts of Catalonia were the first parliament of Europe that officially obtained the power to pass legislation, apart from the custom. An early example of parliamentary government also occurred in today's Netherlands and Belgium during the Dutch revolt (1581), when the sovereign, legislative and executive powers were taken over by the States General of the Netherlands from the monarch, King Philip II of Spain.[citation needed] Significant developments occurred in the Kingdom of Great Britain, in particular in the period 1707 to 1800, and in its contemporary, the parliamentary system of Sweden between 1721 and 1772, and later in Europe and elsewhere in the 19th and 20th centuries with the expansion of like institutions and beyond. In England, Simon de Montfort is remembered as one of the figures relevant to the later development of parliaments, having convened two famous parliaments. The first, in 1258, stripped the king of unlimited authority and the second, in 1265, included ordinary citizens from the towns. Later, in the 17th century, the Parliament of England pioneered some of the ideas and systems of liberal democracy culminating in the Glorious Revolution and passage of the Bill of Rights 1689. 
In the Kingdom of Great Britain, the monarch in theory chaired the cabinet and chose ministers. In practice, King George I's inability to speak English led to the responsibility for chairing cabinet to go to the leading minister, literally the prime or first minister, Robert Walpole. The gradual democratisation of Parliament with the broadening of the voting franchise increased Parliament's role in controlling government, and in deciding whom the king could ask to form a government. By the 19th century, the Great Reform Act 1832 led to parliamentary dominance, with its choice invariably deciding who was prime minister and the complexion of the government. Other countries gradually adopted what came to be called the Westminster system of government, with an executive answerable to the lower house of a bicameral parliament, and exercising, in the name of the head of state, powers nominally vested in the head of state – hence the use of phrases such as Her Majesty's government (in constitutional monarchies) or His Excellency's government (in parliamentary republics). Such a system became particularly prevalent in older British dominions, many of which had their constitutions enacted by the British parliament; such as Australia, New Zealand, Canada, the Irish Free State and the Union of South Africa. Some of these parliaments were reformed from, or were initially developed as distinct from their original British model: the Australian Senate, for instance, has since its inception more closely reflected the US Senate than the British House of Lords; whereas since 1950 there is no upper house in New Zealand. Many of these countries such as Trinidad and Tobago and Barbados have severed institutional ties to Great Britain by becoming republics with their own ceremonial presidents, but retain the Westminster system of government. The idea of parliamentary accountability and responsible government spread with these systems. Democracy and parliamentarianism became increasingly prevalent in Europe in the years after World War I, partially imposed by the democratic victors,[how?] the United States, Great Britain and France, on the defeated countries and their successors, notably Germany's Weimar Republic and the First Austrian Republic. Nineteenth-century urbanisation, the Industrial Revolution and modernism had already made the parliamentarist demands of the Radicals and the emerging movement of social democrats increasingly impossible to ignore; these forces came to dominate many states that transitioned to parliamentarism, particularly in the French Third Republic where the Radical Party and its centre-left allies dominated the government for several decades. The rise of fascism in the 1930s put an end to parliamentary democracy in Italy and Germany, among others. After the Second World War, the defeated Axis powers were occupied by the victorious Allies. In those countries occupied by the Allied democracies (the United States, United Kingdom, and France) parliamentary constitutions were implemented, resulting in the Constitution of Italy and the Basic Law for the Federal Republic of Germany (now all of Germany) and the 1947 Constitution of Japan. 
The experiences of the war in the occupied nations where the legitimate democratic governments were allowed to return strengthened the public commitment to parliamentary principles; in Denmark, a new constitution was written in 1953, while a long and acrimonious debate in Norway resulted in no changes being made to the Constitution of Norway, a strongly entrenched democratic constitution. Characteristics A parliamentary system may be either bicameral, with two chambers of parliament (or houses), or unicameral, with just one parliamentary chamber. A bicameral parliament usually consists of a directly elected lower house with the power to determine the executive government, and an upper house which may be appointed or elected through a different mechanism from the lower house. A 2019 peer-reviewed meta-analysis based on 1,037 regressions in 46 studies finds that presidential systems generally seem to favor revenue cuts, while parliamentary systems rely more on fiscal expansion, characterized by a higher level of spending before an election. Scholars of democracy such as Arend Lijphart distinguish two types of parliamentary democracies: the Westminster and Consensus systems. Implementations of the parliamentary system can also differ as to how the prime minister and government are appointed and whether the government needs the explicit approval of the parliament, rather than just the absence of its disapproval. While most parliamentary systems, such as India's, require the prime minister and other ministers to be members of the legislature, in other countries, like Canada and the United Kingdom, this exists only as a convention; some other countries, including Norway, Sweden and the Benelux countries, require a sitting member of the legislature to resign their seat upon being appointed to the executive. Parliamentary systems also vary in how the head of state is elected or selected. Parliamentary monarchies operate under hereditary succession. Parliamentary republics most commonly elect the head of state directly by popular vote, typically via a two-round system and therefore on a majority or plurality principle. Furthermore, there are variations as to what conditions exist (if any) for the government to have the right to dissolve the parliament. The parliamentary system can be contrasted with a presidential system, which operates under a stricter separation of powers, whereby the executive does not form part of, nor is appointed by, the parliamentary or legislative body. In such a system, parliaments or congresses do not select or dismiss heads of government, and governments cannot request an early dissolution as may be the case for parliaments (although the parliament may still be able to dissolve itself, as in the case of Cyprus). There also exists the semi-presidential system, which draws on both presidential and parliamentary systems by combining a powerful president with an executive responsible to parliament: for example, the French Fifth Republic. Parliamentarianism may also apply to regional and local governments. An example is Oslo, which has an executive council (Byråd) as part of the parliamentary system. The devolved nations of the United Kingdom are also parliamentary and, as with the UK Parliament, may hold early elections; this has only occurred with regard to the Northern Ireland Assembly, in 2017 and 2022. 
A few parliamentary democratic nations such as India, Pakistan and Bangladesh have enacted laws that prohibit floor crossing or switching parties after the election. Under these laws, elected representatives lose their seat in the parliament if they vote against their party. In the UK Parliament, a member is free to cross over to a different party. In Canada and Australia, there are no restraints on legislators switching sides. In New Zealand, waka-jumping legislation provides that MPs who switch parties or are expelled from their party may be expelled from Parliament at the request of their former party's leader. A few parliamentary democracies such as the United Kingdom and New Zealand have weak or non-existent checks on the legislative power of their parliaments, where any newly approved Act takes precedence over all prior Acts. All laws are equally unentrenched, and judicial review may neither annul nor amend them outright, as frequently occurs in other parliamentary systems like Germany's. Whilst the head of state of both nations (the monarch, and/or the governor-general) has the de jure power to withhold assent to any bill passed by their Parliament, this check has not been exercised in Britain since the 1708 Scottish Militia Bill. Whilst both the UK and New Zealand have some Acts or parliamentary rules establishing supermajorities or additional legislative procedures for certain legislation, such as previously with the Fixed-term Parliaments Act 2011 (FTPA), these can be bypassed by enacting another Act that amends or ignores those requirements, as with the Early Parliamentary General Election Act 2019, which bypassed the two-thirds supermajority required for an early dissolution under the FTPA and enabled the early dissolution for the 2019 general election. Parliamentarism metrics allow a quantitative comparison of the strength of parliamentary systems for individual countries. One parliamentarism metric is the Parliamentary Powers Index. Advantages Parliamentary systems like that found in the United Kingdom are widely considered to be more flexible, allowing a rapid change in legislation and policy as long as there is a stable majority or coalition in parliament, and allowing the government to have 'few legal limits on what it can do'. When combined with first-past-the-post voting, this system produces the classic "Westminster model" with the twin virtues of strong but responsive party government. This electoral system, which provides a strong majority in the House of Commons, paired with the fused power system, results in a particularly powerful government able to provide change and 'innovate'. The United Kingdom's fused power system is often noted to be advantageous with regard to accountability. The centralised government allows for more transparency as to where decisions originate; this contrasts with the American system, with Treasury Secretary C. Douglas Dillon saying "the president blames Congress, the Congress blames the president, and the public remains confused and disgusted with government in Washington". Furthermore, ministers of the U.K. cabinet are subject to weekly Question Periods in which their actions and policies are scrutinised; no such regular check on the government exists in the U.S. system. A 2001 World Bank study found that parliamentary systems are associated with less corruption. 
In his 1867 book The English Constitution, Walter Bagehot praised parliamentary governments for producing serious debates, for allowing for a change in power without an election, and for allowing elections at any time. Bagehot considered fixed-term elections such as the four-year election rule for presidents of the United States to be unnatural, as they can potentially allow a president who has disappointed the public with a dismal performance in the second year of their term to continue on until the end of their four-year term. Under a parliamentary system, a prime minister who has lost support in the middle of their term can be easily replaced by their own peers with a more popular alternative, as the Conservative Party in the UK did with successive prime ministers David Cameron, Theresa May, Boris Johnson, Liz Truss, and Rishi Sunak. Although Bagehot praised parliamentary governments for allowing an election to take place at any time, the lack of a definite election calendar can be abused. Under some systems, such as the British, a ruling party can schedule elections when it believes that it is likely to retain power, and so avoid elections at times of unpopularity. (From 2011, election timing in the UK was partially fixed under the Fixed-term Parliaments Act 2011, which was repealed by the Dissolution and Calling of Parliament Act 2022.) Thus, by a shrewd timing of elections, in a parliamentary system a party can extend its rule for longer than is feasible in a presidential system. This problem can be alleviated somewhat by setting fixed dates for parliamentary elections, as is the case in several of Australia's state parliaments. In other systems, such as the Dutch and the Belgian, the ruling party or coalition has some flexibility in determining the election date. Conversely, flexibility in the timing of parliamentary elections can avoid periods of legislative gridlock that can occur in a fixed-period presidential system. In any case, voters ultimately have the power to choose whether to vote for the ruling party or someone else. Disadvantages According to Arturo Fontaine, parliamentary systems in Europe have yielded very powerful heads of government, which is precisely what is often criticized about presidential systems. Fontaine compares the United Kingdom's Margaret Thatcher to the United States' Ronald Reagan, noting that the former head of government was much more powerful despite governing under a parliamentary system. The rise to power of Viktor Orbán in Hungary has been claimed to show how parliamentary systems can be subverted. The situation in Hungary was, according to Fontaine, enabled by the deficient separation of powers that characterises parliamentary and semi-presidential systems. Once Orbán's party got two-thirds of the seats in Parliament in a single election, a supermajority large enough to amend the Hungarian constitution, there was no institution able to balance the concentration of power. In a presidential system, creating the same effect would require at least two separate elections, the presidential and the legislative, with the president's party also winning the legislative supermajority required for constitutional amendments. Safeguards against this situation that can be implemented in both systems include the establishment of an upper house or a requirement for external ratification of constitutional amendments, such as a referendum.
Fontaine also notes, as a warning example of the flaws of parliamentary systems, that if the United States had a parliamentary system, Donald Trump, as head of government, could have dissolved the United States Congress. The ability of strong parliamentary governments to push legislation through with the ease of fused power systems such as the United Kingdom's, whilst positive in allowing rapid adaptation when necessary (e.g. the nationalisation of services during the world wars), does in the opinion of some commentators have its drawbacks. For instance, the flip-flopping of legislation back and forth as the parliamentary majority changed between the Conservatives and Labour over the period 1940–1980, contesting the nationalisation and privatisation of the British steel industry, resulted in major instability for the British steel sector. In his book Are Parliamentary Systems Better?, R. Kent Weaver writes that an advantage of presidential systems is their ability to allow and accommodate more diverse viewpoints. He states that because "legislators are not compelled to vote against their constituents on matters of local concern, parties can serve as organizational and roll-call cuing vehicles without forcing out dissidents". All current parliamentary democracies see the indirect election or appointment of their head of government. As a result, the electorate has limited power to remove or install the person or party wielding the most power. Although strategic voting may enable the party of the prime minister to be removed or empowered, this can come at the expense of voters' first preferences in the many parliamentary systems using first past the post, or have no effect in dislodging those parties who consistently form part of a coalition government, as with then-Dutch prime minister Mark Rutte, whose party, the VVD, served four terms in office despite its peak support reaching only 26.6% in 2012. See also References External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Al-Mirr] | [TOKENS: 772] |
Contents Al-Mirr Al-Mirr, also named Mahmudiyeh ("the property of Mahmud"), was a Palestinian Arab village in the Jaffa Subdistrict, which was depopulated during the 1947–1948 Civil War in Mandatory Palestine on February 1, 1948. Location The village was located 16.5 kilometers (10.3 mi) northeast of Jaffa, on the southern bank of the al-'Awja river. A short, secondary track linked it to the railway line running between Ras al-Ayn and Petah Tikva. History A mill and dam built at this site in the late Roman/early Byzantine period were repaired in Crusader times. The mill was mentioned in Crusader sources in 1158/9 C.E. Excavations of the mill have recovered several 14th-century coins, which indicate that it was in use in the Mamluk period. The modern village was founded during the reign of Mahmud II (1808–39), the Sultan of the Ottoman Empire, and was also known as "Al Mahmudiyya". The village appeared as el Mir on Kiepert's map of Palestine published in 1856. In 1870 Victor Guérin visited and described the village (which he called Ma'moudieh): "It contains at most two hundred inhabitants, who live in houses built of adobe. Several mills are set in motion by the cascading waterfalls along the Nahr el-A'oudjeh. A small bridge over the river makes it possible to cross it at this point". An Ottoman village list from about the same year indicated 30 houses and a population of 69, though the population count included men only. The PEF's Survey of Western Palestine in 1882 described al-Mirr as "a small mud village, with mills close to the river." During the British Mandate for Palestine, the population was recorded as 75 Muslims in the 1922 census, and the village was classified as a hamlet in the Palestine Index Gazetteer. In the 1931 census Mahmudiya had 101 inhabitants, still all Muslims, in 25 houses. In the 1945 statistics the population numbered 170 Muslims, who worked in agriculture and transportation. Cultivated lands in the village in 1944–45 included 2 dunums planted with citrus and bananas, and 31 dunums planted with cereals; 2 dunums were classified as built-up areas. Before the outbreak of the 1948 Arab-Israeli war, al-Mirr's inhabitants left on February 3, 1948, out of fear of Jewish attack. According to Benny Morris, some of the inhabitants returned on February 15, but fled for the final time one month later. However, according to Walid Khalidi, citing The New York Times, the villagers apparently returned yet again, as Jewish forces attacked the village in mid-May. The 13 May attack would have occurred around the same time as an attack into the area by Irgun. The remains of a Turkish bridge lie where the village was. Andrew Petersen, an archaeologist specializing in Islamic architecture, visited the mill in 1991. He found that it had probably been built in several phases. Presently, it consists of a rectangular building, 60 m north–south by 10 m east–west, on two levels. At the lower level are at least 13 parallel water inlets. These inlets are of two different types (indicating different construction dates): one with a flat slab roof, and one with a pointed vaulted roof. Between the two levels are holes in the floor; presumably this is where the millstones were connected to the turbines. See also References Bibliography External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_identity_theory] | [TOKENS: 3152] |
Contents Social identity theory Social identity is the portion of an individual's self-concept derived from perceived membership in a relevant social group. As originally formulated by social psychologists Henri Tajfel and John Turner in the 1970s and the 1980s, social identity theory introduced the concept of a social identity as a way in which to explain intergroup behaviour. "Social identity theory explores the phenomenon of the 'ingroup' and 'outgroup', and is based on the view that identities are constituted through a process of difference defined in a relative or flexible way depending on the activities in which one engages." This theory is described as a theory that predicts certain intergroup behaviours on the basis of perceived group status differences, the perceived legitimacy and stability of those status differences, and the perceived ability to move from one group to another. This contrasts with occasions where the term "social identity theory" is used to refer to general theorizing about human social selves. Moreover, and although some researchers have treated it as such, social identity theory was never intended to be a general theory of social categorization. It was awareness of the limited scope of social identity theory that led John Turner and colleagues to develop a cousin theory in the form of self-categorization theory, which built on the insights of social identity theory to produce a more general account of self and group processes. The term social identity approach, or social identity perspective, is suggested for describing the joint contributions of both social identity theory and self-categorization theory. Social identity theory suggests that an organization can change individual behaviours if it can modify their self-identity, or the part of their self-concept that derives from the knowledge of, and emotional attachment to, the group. Development The term 'social identity theory' achieved academic purchase only in the late 1970s, but the basic underlying concepts associated with it had emerged by the early twentieth century. William G. Sumner, writing in 1906, captured the primary dynamics in his influential work Folkways: A Study of the Sociological Importance of Usages, Manners, Customs, Mores, and Morals. By the late 1920s the collectivist perspective had all but disappeared from mainstream social psychology. Over fifty years later, around the time of the first formal use of the term 'social identity theory', Tajfel commented on the state of social psychology. Thus, social identity theory in part reflects a desire to reestablish a more collectivist approach to social psychology of the self and social groups. Social Identity Theory “Recent research in social psychology demonstrates that social identity influences not only group attitudes but also cognition, moral judgment, and decision-making. Van Bavel and Packer (2021) argue that shared identities powerfully shape attention, trust, and perceptions of threat, linking group processes to findings in cognitive neuroscience. Their work shows that people process information differently depending on whether it comes from an ingroup or an outgroup, and that identity-based motivations can influence moral behavior, cooperation, and political decision-making.
These findings extend classic Social Identity Theory by illustrating how group membership shapes fundamental psychological processes, not only intergroup bias.” Aspects Social identity theory states that social behaviour varies along a continuum between interpersonal behaviour and intergroup behaviour. Completely interpersonal behaviour would be behaviour determined solely by the individual characteristics and interpersonal relationships that exist between only two people. Completely intergroup behaviour would be behaviour determined solely by the social category memberships that apply to more than two people. The authors of social identity theory state that purely interpersonal or purely intergroup behaviour is unlikely to be found in realistic social situations. Rather, behaviour is expected to be driven by a compromise between the two extremes. The cognitive nature of personal vs. social identities, and the relationship between them, is more fully developed in self-categorization theory. Social identity theory instead focuses on the social structural factors that will predict which end of the spectrum will most influence an individual's behaviour, along with the forms that the behaviour may take. A key assumption in social identity theory is that individuals are intrinsically motivated to achieve positive distinctiveness. That is, individuals "strive for a positive self-concept". As individuals to varying degrees may be defined and informed by their respective social identities (as per the interpersonal-intergroup continuum), it is further derived in social identity theory that "individuals strive to achieve or to maintain positive social identity". The precise nature of this striving for positive self-concept is a matter of debate (see the self-esteem hypothesis). Both the interpersonal-intergroup continuum and the assumption of positive distinctiveness motivation arose as outcomes of the findings of minimal group studies. In particular, it was found that under certain conditions individuals would endorse resource distributions that would maximize the positive distinctiveness of an in-group in contrast to an out-group at the expense of personal self-interest. Social identity matters because it shapes people's self-perceptions and interpersonal relationships. Favorable self-perception increases the likelihood that an individual will relate well to other members of the group and experience favorable feelings about themselves. People's perceptions of themselves are shaped by the group they identify with more strongly. Gaining status within the group can make people feel more confident, content, and respected, since belonging to that group becomes significant for how they view themselves and their talents.[citation needed] Building on the above components, social identity theory details a variety of strategies that may be invoked in order to achieve positive distinctiveness. The individual's choice of behaviour is posited to be dictated largely by the perceived intergroup relationship. In particular, the choice of strategy is an outcome of the perceived permeability of group boundaries (e.g., whether a group member may pass from a low status group into a high status group), as well as the perceived stability and legitimacy of the intergroup status hierarchy. The self-enhancing strategies posited by social identity theory are detailed below.
Importantly, although these are viewed from the perspective of a low status group member, comparable behaviours may also be adopted by high status group members. It is predicted that under conditions where the group boundaries are considered permeable, individuals are more likely to engage in individual mobility strategies. That is, individuals "disassociate from the group and pursue individual goals designed to improve their personal lot rather than that of their ingroup". Where group boundaries are considered impermeable, and where status relations are considered reasonably stable, individuals are predicted to engage in social creativity behaviours. Here, low-status ingroup members are still able to increase their positive distinctiveness without necessarily changing the objective resources of the ingroup or the outgroup. This may be achieved by comparing the ingroup to the outgroup on some new dimension, changing the values assigned to the attributes of the group, or choosing an alternative outgroup by which to compare the ingroup. The third strategy is social competition: here an ingroup seeks positive distinctiveness and requires positive differentiation via direct competition with the outgroup in the form of ingroup favoritism. It is considered competitive in that, in this case, favoritism for the ingroup occurs on a value dimension that is shared by all relevant social groups (in contrast to social creativity scenarios). Social competition is predicted to occur when group boundaries are considered impermeable, and when status relations are considered to be reasonably unstable. Although not privileged in the theory, it is this positive distinctiveness strategy that has received the greatest amount of attention. In political science, social identity theory has been incorporated as the subconstituency politics theory of representation. This theory holds that political elites are individually rational, and that they use identity instrumentally to cultivate minimum winning constituencies (e.g., via the "microtargeting" of ads). An example of microtargeting is the Russian use of social media advertising alleged to have influenced the 2016 United States presidential election. Separately, a recent Science Advances article validates a computational model of in-group favoritism and political economy developed by Princeton political scientist Nolan McCarty using public opinion polling data. Implications In-group favoritism (also known as "ingroup bias", despite Turner's objections to the term) is an effect whereby people give preferential treatment to others when they are perceived to be in the same ingroup. Social identity theory attributes the cause of ingroup favoritism to a psychological need for positive distinctiveness and describes the situations in which ingroup favoritism is likely to occur (as a function of perceived group status, legitimacy, stability, and permeability). It has been shown via the minimal group studies that ingroup favoritism may occur for both arbitrary ingroups (e.g. a coin toss may split participants into a 'heads' group and a 'tails' group) as well as non-arbitrary ingroups (e.g. ingroups based on cultures, genders, sexual orientation, and first languages). Continued study into the relationship between social categorization and ingroup favoritism has explored the relative prevalence of ingroup favoritism vs. outgroup discrimination, explored different manifestations of ingroup favoritism, and explored the relationship between ingroup favoritism and other psychological constraints (e.g., existential threat).
System justification theory was originally proposed by John Jost and Mahzarin Banaji in 1994 to build on social identity theory and to understand important deviations from ingroup favoritism, such as outgroup favoritism on the part of members of disadvantaged groups (Jost & Banaji, 1994; Jost, 2020). Social identity threat was also inspired by social identity theory and developed by Branscombe and colleagues in 1999 as a mechanism to understand and explain the different types of threats that arise from group identity being threatened. Social identification can lead individuals to engage in prosocial behaviours towards others. Examples include contexts such as food drives or even shared purchasing patterns, as might occur for motorcycle riders. Consumers may have sub-identities that are nested into a larger identity. As a result, "[w]hen consumers identify with the overall community, they assist other consumers. However, consumers are less likely to help consumers in the overall community when identifying with a subgroup". Social identities are a valued aspect of the self, and people will sacrifice their pecuniary self-interest to maintain the self-perception that they belong to a given social group. Political partisans and fans of sports teams (e.g., Republicans and Democrats, or MLB, NFL, NCAA fans) are reluctant to bet against the success of their party or team because of the diagnostic cost such a bet would incur to their identification with it. As a result, partisans and fans will reject even very favorable bets against identity-relevant desired outcomes. More than 45% of N.C.A.A. basketball and hockey fans, for example, turned down a free, real chance to earn $5 if their team lost its upcoming game. Controversies Social identity theory proposes that people are motivated to achieve and maintain positive concepts of themselves. Some researchers, including Michael Hogg and Dominic Abrams, thus propose a fairly direct relationship between positive social identity and self-esteem. In what has become known as the "self-esteem hypothesis", self-esteem is predicted to relate to in-group bias in two ways. Firstly, successful intergroup discrimination elevates self-esteem. Secondly, depressed or threatened self-esteem promotes intergroup discrimination. Empirical support for these predictions has been mixed. Some social identity theorists, including John Turner, consider the self-esteem hypothesis as not canonical to social identity theory. In fact, the self-esteem hypothesis is argued to be conflictual with the tenets of the theory. It is argued that the self-esteem hypothesis misunderstands the distinction between a social identity and a personal identity. Along those lines, John Turner and Penny Oakes argue against an interpretation of positive distinctiveness as a straightforward need for self-esteem or "quasi-biological drive toward prejudice". They instead favour a somewhat more complex conception of positive self-concept as a reflection of the ideologies and social values of the perceiver. Additionally, it is argued that the self-esteem hypothesis neglects the alternative strategies to maintaining a positive self-concept that are articulated in social identity theory (i.e., individual mobility and social creativity). In what has been dubbed the Positive-Negative Asymmetry Phenomenon, researchers have shown that punishing the out-group benefits self-esteem less than rewarding the in-group. 
From this finding it has been extrapolated that social identity theory is therefore unable to deal with bias on negative dimensions. Social identity theorists, however, point out that for ingroup favouritism to occur a social identity "must be psychologically salient", and that negative dimensions may be experienced as a "less fitting basis for self-definition". This important qualification is subtly present in social identity theory, but is further developed in self-categorization theory. Empirical support for this perspective exists. It has been shown that when experiment participants can self-select the negative dimensions that define the ingroup, no positive–negative asymmetry is found. It has been posited that social identity theory suggests that similar groups should have an increased motivation to differentiate themselves from each other. Subsequently, empirical findings where similar groups are shown to possess increased levels of intergroup attraction and decreased levels of in-group bias have been interpreted as problematic for the theory. Elsewhere it has been suggested that this apparent inconsistency may be resolved by attending to social identity theory's emphasis on the importance of the perceived stability and legitimacy of the intergroup status hierarchy. Social identity theory has been criticised for having far greater explanatory power than predictive power. That is, while the relationship between independent variables and the resulting intergroup behaviour may be consistent with the theory in retrospect, that particular outcome is often not the one that was predicted at the outset. A rebuttal to this charge is that the theory was never advertised as the definitive answer to understanding intergroup relationships. Instead it is stated that social identity theory must go hand in hand with sufficient understanding of the specific social context under consideration. The latter argument is consistent with the explicit importance that the authors of social identity theory placed on the role of "objective" factors, stating that in any particular situation "the effects of [social identity theory] variables are powerfully determined by the previous social, economic, and political processes". Some researchers interpret social identity theory as drawing a direct link between identification with a social group and ingroup favoritism. This is because social identity theory was proposed as a way of explaining the ubiquity of ingroup favoritism in the minimal group paradigm. For example, Charles Stangor and John Jost state that "a main premise of social identity theory is that ingroup members will favour their own group over other groups". This interpretation is rejected by other researchers. For example, Alex Haslam states that "although vulgarized versions of social identity theory argue that 'social identification leads automatically to discrimination and bias', in fact…discrimination and conflict are anticipated only in a limited set of circumstances". The equation of social identity theory with social competition and ingroup favouritism is partly attributable to the fact that early statements of the theory included empirical examples of ingroup favouritism, while alternative positive distinctiveness strategies (e.g., social creativity) were at that stage theoretical assertions. Regardless, in some circles the prediction of a straightforward identification-bias correlation has earned the pejorative title "social identity theory-lite".
This raises the problem of whether social identity theory really does explain the ubiquity of ingroup favoritism in the minimal group paradigm without recourse to "the generic norm hypothesis" originally proposed by Tajfel but later abandoned.[citation needed] See also References Further reading External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Portal:Astronomy] | [TOKENS: 851] |
Portal:Astronomy Introduction Astronomy is a natural science that studies celestial objects and the phenomena that occur in the cosmos. It uses mathematics, physics, and chemistry to explain their origin and their overall evolution. Objects of interest include planets, moons, stars, nebulae, galaxies, meteoroids, asteroids, and comets. Relevant phenomena include supernova explosions, gamma ray bursts, quasars, blazars, pulsars, and cosmic microwave background radiation. More generally, astronomy studies everything that originates beyond Earth's atmosphere. Cosmology is the branch of astronomy that studies the universe as a whole. Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky. These include the Egyptians, Babylonians, Greeks, Indians, Chinese, Maya, and many ancient indigenous peoples of the Americas. In the past, astronomy included disciplines as diverse as astrometry, celestial navigation, observational astronomy, and the making of calendars. Astronomy is one of the few sciences in which amateurs play an active role. This is especially true for the discovery and observation of transient events. Amateur astronomers have helped with many important discoveries, such as finding new comets. Featured article A galaxy is a system of stars, stellar remnants, interstellar gas, dust, and dark matter bound together by gravity. The word is derived from the Greek galaxias (γαλαξίας), literally 'milky', a reference to the Milky Way galaxy that contains the Solar System. Galaxies, averaging an estimated 100 million stars, range in size from dwarfs with less than a thousand stars, to the largest galaxies known – supergiants with one hundred trillion stars, each orbiting its galaxy's centre of mass. Most of the mass in a typical galaxy is in the form of dark matter, with only a few percent of that mass visible in the form of stars and nebulae. Supermassive black holes are a common feature at the centres of galaxies. Galaxies are categorised according to their visual morphology as elliptical, spiral, or irregular. The Milky Way is an example of a spiral galaxy. In addition to shape, galaxies may be notable due to special properties, such as interacting with another galaxy, producing stars at an unusual rate, or having an active galactic nucleus. It is estimated that there are between 200 billion (2×10¹¹) and 2 trillion galaxies in the observable universe. Most galaxies are 1,000 to 100,000 parsecs in diameter (approximately 3,000 to 300,000 light years) and are separated by distances in the order of millions of parsecs (or megaparsecs). For comparison, the Milky Way has a diameter of at least 26,800 parsecs (87,400 ly) and is separated from the Andromeda Galaxy, its nearest large neighbour, by just over 750,000 parsecs (2.5 million ly). Selected image The Orion Nebula (also known as Messier 42, M42, or NGC 1976) is a diffuse nebula situated in the Milky Way, being south of Orion's Belt in the constellation of Orion. It is one of the brightest nebulae, visible to the naked eye in the night sky.
The entire Orion Nebula in a composite image of visible light and infrared, taken by the Hubble Space Telescope in 2006.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_media_intelligence] | [TOKENS: 1404] |
Contents Social media intelligence Social media intelligence (SMI or SOCMINT) comprises the collective tools and solutions that allow organizations to analyze conversations, respond to social signals, and synthesize social data points into meaningful trends and analysis, based on the user's needs. Social media intelligence allows one to utilize intelligence gathering from social media sites, using either intrusive or non-intrusive means, from open and closed social networks. This type of intelligence gathering is one element of OSINT (Open-Source Intelligence). The term was coined in a 2012 paper written by Sir David Omand, Jamie Bartlett and Carl Miller for the Centre for the Analysis of Social Media, at the London-based think tank Demos. The authors argued that social media is now an important part of intelligence and security work, but that technological, analytical, and regulatory changes are needed before it can be considered a powerful new form of intelligence, including amendments to the United Kingdom Regulation of Investigatory Powers Act 2000. Given the dynamic evolution of social media and social media monitoring, our current understanding of how social media monitoring can help organizations create business value is inadequate. As a result, there is a need to study how organizations can (a) extract and analyze social media data related to their business (Sensing), and (b) utilize external intelligence gained from social media monitoring for specific business initiatives (Seizing); a minimal sketch of the sensing step appears after this passage. Governmental Use In Thailand, the Technology Crime Suppression Division not only employs a 30-person team to scrutinize social media for content deemed disrespectful to the monarchy, known as lèse-majesté, but also encourages citizens to report such content. Particularly targeting the youth, they run a "Cyber Scout" program in which participants are rewarded for reporting individuals posting material perceived as detrimental to the monarchy. Instances in Israel involve the arrest of Palestinians by the police for their social media posts. An example includes a 15-year-old girl who posted a Facebook status with the words "forgive me," raising suspicions among Israeli authorities that she might be planning an attack. In Egypt, a leaked 2014 call for tender from the Ministry of Interior reveals efforts to procure a social media monitoring system to identify leading figures and prevent protests before they occur. In the United States, ZeroFOX faced criticism for sharing a report with Baltimore officials showcasing how their social media monitoring tool could track riots following Freddie Gray's funeral. The report labeled 19 individuals, including two prominent figures from the #BlackLivesMatter movement, as "threat actors." In the UK, the Association of Chief Police Officers of England, Wales, and Northern Ireland emphasized the significance of social media in intelligence gathering during anti-fracking protests in 2011. Social media analysis closely monitored protests against the badger cull in 2013, with a 2013 report revealing a team of 17 officers in the National Domestic Extremism Unit scanning public tweets, YouTube videos, Facebook profiles, and other online content from UK citizens. Effects on Political Opinion Regarding the 2016 United States presidential election, the Senate Intelligence Committee released reports containing information about Russia's use of troll farms to mislead black voters about voting.
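As a minimal illustration of the "Sensing" step described above (extracting and analyzing social media data), the Python sketch below counts keyword mentions per day in a small, hard-coded batch of posts. The posts, keywords, and field names are hypothetical placeholders; real SOCMINT tooling would ingest data from platform APIs at much larger scale and apply far richer analysis.

```python
# Minimal "Sensing" sketch: count keyword mentions per day in a batch of posts.
# All posts, keywords, and field names here are hypothetical placeholders.

from collections import Counter
from datetime import date

# Hypothetical, already-collected public posts.
posts = [
    {"date": date(2024, 5, 1), "text": "Big protest planned downtown tomorrow"},
    {"date": date(2024, 5, 1), "text": "Loving the new product launch!"},
    {"date": date(2024, 5, 2), "text": "Protest turnout was huge, traffic blocked"},
    {"date": date(2024, 5, 2), "text": "Customer service was slow today"},
]

keywords = {"protest", "launch"}

def mentions_per_day(posts, keywords):
    """Return a Counter mapping (day, keyword) -> number of posts mentioning it."""
    counts = Counter()
    for post in posts:
        text = post["text"].lower()
        for kw in keywords:
            if kw in text:
                counts[(post["date"], kw)] += 1
    return counts

for (day, kw), n in sorted(mentions_per_day(posts, keywords).items()):
    print(f"{day} {kw}: {n} mention(s)")
```

The output of a sketch like this (mention counts over time) is the kind of raw trend signal that the "Seizing" step would then interpret for a specific business or operational initiative.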
Also, German researchers in 2010 analyzed Twitter messages regarding the German federal election, concluding that Twitter played a role in leading users to a specific political opinion. In a broad sense, social media refers to a conversational, distributed mode of content generation, dissemination, and communication among communities. Different from broadcast-based traditional and industrial media, social media has torn down the boundaries between authorship and readership, while the information consumption and dissemination process is becoming intrinsically intertwined with the process of generating and sharing information. An example of how SOCMINT is used to affect political opinions is the Cambridge Analytica scandal. Cambridge Analytica was a company that purchased data from Facebook about its users without the consent or knowledge of Americans. It used this data to build a "psychological warfare tool" to persuade US voters to elect Donald Trump as president in the 2016 election. Christopher Wylie, the whistleblower, reported that personal information was taken in early 2014 and used to build a system that could target US voters with personalized political advertisements. More than 50 million individuals' data was exploited and manipulated. Law Enforcement In September 2023, the Philadelphia Police Department began using social media to track criminal activity and stay one step ahead of it, in order to stop meetups and potential robberies. This new approach has given officers another tool in the field, allowing them to find new information as quickly as possible. Law enforcement agencies worldwide are increasingly employing social media intelligence to enhance their capabilities in both crime prevention and investigation. By analyzing publicly available data from social platforms such as Facebook, Twitter, and Instagram, police can track criminal activities, identify suspects, and even prevent potential crimes before they occur. For instance, the FBI utilizes SOCMINT to monitor threats and investigate criminal activities, including analyzing posts, images, and videos that might signal illegal activities or security concerns. Marketing SOCMINT collects data from both organizations and people on an individual level. It has a variety of different purposes, and though its main goal is to support national security, there are several other benefits as well. This intelligence can identify patterns, predict trends, gather information in real time, and more. In addition, these capabilities have allowed for both improvement within businesses and help for law enforcement. Artificial Social Networking Intelligence (ASNI) refers to the application of artificial intelligence within social networking services and social media platforms. It encompasses various technologies and techniques used to automate, personalize, enhance, improve, and synchronize users' interactions and experiences within social networks. ASNI is expected to evolve rapidly, influencing how people interact online and shaping their digital experiences. Transparency, ethical considerations, media influence bias, and user control over data will be crucial to ensure responsible development and positive impact. Google provides many free services and has built an entire media brand with its vast variety of products. Along with data collection, Google also owns two advertising services, Google Ads and Google AdSense. Surprisingly, most of its revenue comes from advertising, not direct sales of its services or products.
Google makes money by selling advertising services to advertisers. It provides ad space to websites and targets ads to consumers of Google services and products. Google can market ads using SOCMINT to collect data from its users and generate revenue. Research shows that various social media platforms on the Internet, such as Twitter and Tumblr (micro-blogging websites), Facebook (a popular social networking website), YouTube (the largest video sharing and hosting website), blogs, and discussion forums, are being misused by extremist groups for spreading their beliefs and ideologies, promoting radicalization, recruiting members, and creating online virtual communities sharing a common agenda. Popular microblogging websites such as Twitter are being used as a real-time platform for information sharing and communication during the planning and mobilization of civil unrest-related events. See also References External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_media_and_psychology] | [TOKENS: 6148] |
Contents Social media and psychology Social media began in the form of generalized online communities. These online communities formed on websites like Geocities.com in 1994, Theglobe.com in 1995, and Tripod.com in 1995. Many of these early communities focused on social interaction by bringing people together through the use of chat rooms. The chat rooms encouraged users to share personal information, ideas, or even personal web pages. Later, the social networking community Classmates took a different approach by simply having people link to each other by using their personal email addresses. By the late 1990s, social networking websites began to develop more advanced features to help users find and manage friends. This newer generation of social networking websites began to flourish with the emergence of SixDegrees.com in 1997, Makeoutclub in 2000, Hub Culture in 2002, and Friendster in 2002. However, the first profitable mass social networking website was the South Korean service Cyworld. Cyworld initially launched as a blog-based website in 1999, and social networking features were added to the website in 2001. Other social networking websites emerged, like Myspace in 2002, LinkedIn in 2003, and Bebo in 2005. In 2009, the social networking website Facebook (launched in 2004) became the largest social networking website in the world. Both Instagram and Kik were launched in October 2010. Active users of Facebook increased from just a million in 2004 to over 750 million by the year 2011, making internet-based social networking both a cultural and financial phenomenon. In September 2011, Snapchat was launched; it reported over 300 million users in 2021. Psychology of social networking A social network is a social structure made up of individuals or organizations who communicate and interact with each other. Social networking sites – such as Facebook, Twitter, Instagram, Pinterest and LinkedIn – are defined as technology-enabled tools that assist users with creating and maintaining their relationships. A study found that middle schoolers reported using social media to see what their friends are doing, to post pictures, and to connect with friends. Human behavior related to social networking is influenced by major individual differences, meaning that people differ quite systematically in the quantity and quality of their social relationships. Two of the main personality traits responsible for this variability are extraversion and introversion. Extraversion refers to the tendency to be socially dominant, to exert leadership, and to influence others. In contrast, introversion reflects a tendency towards shyness, social phobia, or even avoidance of social situations altogether, which could potentially reduce the number of social contacts a person may have. These individual differences may result in different social networking outcomes. Other psychological factors related to social media and media psychology are depression, anxiety, attachment, self-identity, well-being, and the need to belong. The three domains of neural systems that are strengthened to support social media use are social cognition, self-referential cognition, and social reward. When someone posts something on social media, they think of how their audience will react, while the audience thinks of the motivations behind posting the information.
Both parties are analyzing the other's thoughts and feelings, which rely on multiple network systems of the brain, including the dorsomedial prefrontal cortex, bilateral temporoparietal junction, anterior temporal lobes, inferior frontal gyri, and posterior cingulate cortex. All of these systems work to help us process the social behaviors and thoughts drawn out on social media. Social media requires a great deal of self-referential thought. People use social media as a platform to express their opinions and show off their past and present selves. In other words, as Bailey Parnell said in her TED Talk, we're showing off our "highlight reel". When one receives feedback from others, the individual obtains more reflected self-appraisal, which leads to comparisons of their social behaviors or "highlights" with those of other users. Self-referential thought involves activity in the medial prefrontal cortex and the posterior cingulate cortex. The brain uses these systems when thinking of oneself. A 2021 umbrella review found that most associations between adolescent social media use and mental health were characterized as weak or inconsistent, though certain studies identified 'substantial' negative impacts, particularly linked to passive consumption and problematic use. Social media also provides a constant supply of rewards that keeps users coming back for more. Whenever users receive a like or a new follower, it activates the brain's social reward system, which includes the ventromedial prefrontal cortex, ventral striatum, and ventral tegmental area. This system has been found to activate in response to positive feedback from peers, suggesting that users experience online acceptance in a manner similar to other material rewards or positive experiences. While these areas of the brain become strengthened, other parts of the brain start to weaken. Technology is encouraging multi-tasking, especially because of how easy it is to switch from one task to another by opening another tab or using two devices at once. The brain's hippocampus is mainly associated with long-term memory. In a study done by Russell Poldrack, a professor at UCLA, researchers found that "for the task learned without distraction, the hippocampus was involved. However, for the task learned with the distraction of the beeps, the hippocampus was not involved; but the striatum was, which is the brain system that underlies our ability to learn new skills." The study concludes that multitasking can cause reliance on the striatum more than the hippocampus, which can change the way we learn. The striatum is known to be connected mainly to the brain's reward system. The brain will strengthen the pathways to the striatum while it weakens the pathways to the hippocampus to make itself more efficient. Because the brain starts to rely on the striatum more than the hippocampus, it becomes harder for us to process new information. Nicholas Carr, author of The Shallows: How The Internet Is Changing Our Brains, agrees: "What psychologists and brain scientists tell us about interruptions is that they have a fairly profound effect on the way we think. It becomes much harder to sustain attention, to think about one thing for a long period of time, and to think deeply when new stimuli are pouring at you all day long.
I argue that the price we pay for being constantly inundated with information is a loss of our ability to be contemplative and to engage in the kind of deep thinking that requires you to concentrate on one thing." How does well-being relate to social media? In an article titled Social Impact of Psychological Research on Well-Being Shared in Social Media, Pulido et al. found a 15.7% social impact in their results. These new results were compared to a previous study conducted by Pulido et al., which had a high of 4.98%, compared to 27.5% in the new study. These results show ESISM, which is evidence of social impact being present. Across the two-year span between the studies, the reported social impact thus rose by 22.52 percentage points. When taking into consideration that an increasingly large number of teens report either being active on, or having used, some form of social media, ranging from apps such as Facebook to TikTok, researching the effects of social media on the well-being of teens and young adults has become more of a topic of focus in recent years. Especially in today's society, social media has taken on a new significance for younger generations: it is what younger generations are born into and grow up using, and it is much of what runs today's society. Social media has its downfalls regarding depression and mental health. Many users often compare their lives with what they see on these platforms. In the article Does Social Media Cause Depression? by the Child Mind Institute, Miller states that "in several studies, teenage and young adult users who spend the most time on Instagram, Facebook and other platforms have been shown to have substantially (from 13 to 66 percent) higher rates of reported depression than those who spent the least time". The study suggests that platforms such as Facebook and Instagram, which showcase daily lives and lifestyles, can leave heavy users feeling less fulfilled and less satisfied, or encourage flaunting and superficial self-presentation. Instead of social community, there has arisen a perception of individuals striving for a life that is not real, whether that means editing photos or making life seem perfect when it is not. This comparison game can contribute to a sense of depression. In "How Social Media Affects Your Teen's Mental Health: A Parent's Guide," Kathy Katella states, "According to a research study of American teens ages 12-15, those who used social media over three hours each day faced twice the risk of having negative mental health outcomes, including depression and anxiety symptoms." Teenagers are facing more and more complications every day because of the overuse of social media. Teenagers and young adults see these idealized lifestyles and make assumptions about their own lives, questioning their values and sense of belonging, which contributes to depression. For example, Facebook and Instagram allow comments on posts or stories, which can attract hateful and nasty comments and bullying that can cause mental health issues. As the internet first began to grow in popularity, researchers noted an association between increases in internet usage and decreases in offline social involvement and psychological well-being. Investigators explained these findings through the hypothesis that the internet supports poor quality relationships. In light of the recent emergence of online social networking, there has been growing concern about a possible relationship between individuals' activities on these forums and symptoms of psychopathology, particularly depression.
Research has shown a positive correlation between time spent on social networking sites and depressive symptoms. One possible explanation for this relationship is that people use social networking sites as a method of social comparison, which leads to social comparison bias. Adolescents who used Facebook and Instagram to compare themselves with and seek reassurance from other users experienced more depressive symptoms. It is likely, though, that the effects of social comparison on social networking sites are influenced by who people are interacting with on those sites. Specifically, Instagram users who followed a higher percentage of strangers were more likely to show an association between Instagram use and depressive symptoms than were users who followed a lower percentage of strangers. Other studies have found that social media use can potentially increase symptoms of depression in adolescents. Kleppang et al. (2021) found that adolescents who used social media or played video games for more than three hours a day experienced a higher proportion of symptoms of depression. The goal of Kleppang's study was to examine the relationship between electronic media use and symptoms of depression and to observe whether gender or platonic relationships affect that relationship. They used surveys and web-based questionnaires to gather data. The subjects, sourced from all over Norway, were adolescents in tenth grade. The questions presented to the participants asked them to identify any symptoms of depression they had experienced, the frequency with which they used social media, and their gender. Research support for a relationship between online social networking and depression remains mixed. For example, some studies have found that people experiencing feelings of inferiority may share these spontaneously on social media rather than seeking face-to-face help from medical professionals. Similarly, Banjanin and colleagues (2015) found a relationship between increased internet use and depressive symptoms, but no relationship between time spent on social networking sites and depressive symptoms. Several other studies have similarly found no relationship between online social networking and depression. In fact, studies that find no particular relationship between social media use and mental health still suggest that continuous support should be available for young people to prevent mental health harms, even though the direction of any relationship between depression and social media use remains unclear. Much of the current research on this issue has examined adolescents aged between 13 and 18, with depression, anxiety or psychological distress as the outcomes, assessed by validated instruments. As Betul and colleagues note in a journal article from the American Academy of Pediatrics, cyberbullying can lead to "profound psychosocial outcomes including depression, anxiety, severe isolation, and, tragically, suicide." This introduces a relationship between social networking and suicide. Cyberbullying on social media has a strong correlation with causes of suicide among adolescents and young adults. Results of a study by Hinduja and Patchin examining a large sample of middle school-aged adolescents found that those who experienced cyberbullying were twice as likely to attempt or complete suicide.
The 2019 School Crime Supplement to the National Crime Victimization Survey (National Center for Education Statistics and Bureau of Justice) indicates that, nationwide, about 16 percent of students in grades 9–12 had experienced cyberbullying by the time they reached high school or while they were in high school. Additionally, "social media sites that allow greater anonymity (e.g., Yik Yak, Whisper) have higher rates of cyberbullying perpetration than sites on which users are more identifiable." Many people find it easier to bully and harass others when their name is not attached to the statement; anonymity shields them from real-world consequences, so they feel less responsible for their actions. In psychology, attachment theory is a model that attempts to describe the interpersonal relationships people have throughout their lives. The four most commonly recognized styles of attachment in adults are: secure, anxious-preoccupied, dismissive-avoidant, and fearful-avoidant. With the rapid increase in social networking sites, scientists have become interested in the phenomenon of people relying on these sites for their attachment needs. Attachment style has been significantly related to the level of social media use and social orientation on Facebook. Additionally, attachment anxiety has been found to be predictive of greater feedback seeking and Facebook usage, whereas attachment avoidance was found to be predictive of less feedback seeking and usage. The study found that anxiously attached individuals more frequently comment, "like," and post. Furthermore, the authors suggest that anxious people behave more actively on social media sites because they are motivated to seek positive feedback from others. Despite their attempts to fulfill their needs, data suggest that individuals who use social media to fill these voids are typically disappointed and further isolate themselves by reducing their face-to-face interaction time with others. One's self-identity, also commonly known as self-concept, can be defined as a collection of beliefs an individual has about himself or herself. It can also be defined as an individual's answer to "Who am I?". Social media offers a means of exploring and forming self-identity, especially for adolescents and young adults. Early adolescence has been found to be the period in which most online identity experimentation occurs, compared to other periods of development. Researchers have identified that some of the most common ways early adolescents explore identity are through self-exploration (e.g. to investigate how others react), social compensation (e.g. to overcome shyness), and social facilitation (e.g. to facilitate relationship formation). Additionally, early adolescents use the Internet more to talk to strangers and form new relationships, whereas older adolescents tend to socialize with current friends. "Individuals have a high need for social affiliation but find it hard to form social connections in the offline world, and social media may afford a sense of connection that satisfies their needs for belonging, social feedback, and social validation." Of the various concepts comprising self-identity, self-esteem and self-image, specifically body image, have been given the most attention in regard to their relationship with social media usage. Despite the popularity of social media, the direct relationship between Internet exposure and body image has been examined in only a few studies.
Individuals are known to have a tendency to compare themselves to others for their own self-evaluation, most prominently during adolescence. Social media makes it even easier for adolescents to engage in these behaviors of social comparison, allowing them to view others all over the world at any given moment. In one study looking at over 150 high school students, survey data regarding online social networking use and body image were collected. With students reporting an average of two to three hours per day online, online social media usage was significantly related to an internalization of thin ideals, appearance comparison, weight dissatisfaction, and drive for thinness. In a more recent study that focused more specifically on Facebook usage in over 1,000 high school girls, the same association between amount of use and body dissatisfaction was found, with Facebook users reporting significantly higher levels of body dissatisfaction than non-users. Current research findings suggest a negative relationship between self-image and social media usage for adolescents. In other words, the more an adolescent uses social media, the more likely he or she is to feel bad about themselves, particularly regarding how they look. Different types of social media engagement may affect self-esteem in youth differently. There are unspoken social norms on social media that can make people come across as 'uncool' or 'desperate'; one study points out that frequently liking and commenting on others' posts is predicted to lower perceived self-esteem. About 40% of teens surveyed revealed that they may refrain from posting on social media for fear of being ridiculed for what they post. Social media use decreases future appearance confidence, especially in young women. It has amplified the negative effects of the beauty standards that many women and young girls struggle to live up to, leaving them more negatively affected by social media and more likely to lash out through the device. According to a study conducted in Italy with students who were 11, 13, and 15 years old, "Girls reported higher cyber-victimization and problematic social media usage than boys (9.1% vs 6.0% and 10.2% vs 6.1%, respectively)." In addition, a survey conducted by the Pew Research Center found higher TikTok usage among Black teens (81%) and teen girls (73%) than among white teen boys (62%). Research on TikTok suggests that social comparison and the fear of missing out "are related to negative affect and might have detrimental effects on the usage experience and/or TikTok users' lives in general." Apps like TikTok can create an addictive social media environment that correlates negatively with self-esteem. Narcissistic personality disorder has been connected with an inflated sense of self-worth and a need for excessive attention. Like many disorders, narcissism takes varying forms: (1) grandiose, typically arrogant, with a heightened sense of entitlement and a belief that they are better than everyone and that everyone knows it; (2) malignant, similar to grandiose, but willing to destroy others in the process of lifting themselves up; (3) covert, arrogance mixed with highly self-absorbed tendencies, an inability to accept responsibility, and a chronic sense of being a victim of the world; and finally (4) communal, self-absorbed and needing acknowledgement for the good they do, while that good is typically for show and not genuine.
There is a direct connection between narcissistic personality disorder and social media. Studies show a connection between narcissism and motives for social media use, such as seeking admiration for content and increasing one's following. It has been suggested that narcissists find their content to be of higher quality and therefore share more information on their social media platforms out of a feeling of superiority. There have been many studies to date, typically using predictive analysis and surveys that require participants to self-report social media usage. It should be noted that this self-reporting directly affects the results and relies on participants to answer truthfully. In 2016, McCain and Campbell found that narcissism was related to a greater number of posts, more time spent on social media platforms, and having more friends/followers on those platforms. In 2017, Andreassen, Pallesen, and Griffiths found that narcissism may be associated with addictive use of social media. Most studies find positive relationships between grandiose forms of narcissism and self-reported social media activities. However, there is still variance in the results, and continued studies investigating how narcissism relates to social media use are needed. Social media also has psychological effects on attention span because of the structure of the platforms and the dopamine released by using them. Our brains become trained to expect constant stimulation, connection, and validation. Apps like TikTok, Instagram, and YouTube Shorts show fast, entertaining videos that release dopamine and make it harder to focus on longer or slower content. Notification alerts and tones also distract people, and one study found that it takes approximately 23 minutes to refocus after an interruption. The more notifications received, the more distractions occur, and the brain is rewired to crave this distraction. People's focus and attention spans continue to decrease as the desire for social media use and distraction increases. To clarify the impact further, it is crucial to acknowledge the complex correlation between mental health issues and social media use. Based on longitudinal research, Primack et al. (2017) found a correlation between heavy social media use and an increase in depressive symptoms in children. This emphasizes the need to understand the complex dynamics at work as well as the possible negative effects. The web of influences on mental health involves several factors, including the type of content consumed, how long it is used for, and the caliber of online interactions. Understanding these intricacies underscores the necessity of a comprehensive approach to awareness and research. The multibillion-dollar advertising industry targeting youth, particularly through digital channels, raises concerns. Research links advertising exposure to unhealthy behaviors in children, such as consumption of low-nutrient foods, tobacco, alcohol, and indoor tanning. Children's vulnerability arises from immature critical thinking. The policy urges pediatricians to promote digital literacy, emphasizing the need for policymakers and tech companies to adopt practices fostering healthier outcomes in the digital environment, and expresses concern about tracking children's digital behavior for targeted marketing. In conclusion, the impact of social media on child psychology is a multifaceted and evolving field of study.
As technology continues to advance, research and awareness must adapt to encompass the complexities of digital interactions. The accessibility and vastness of social media can make it difficult for parents to manage their children's behavior and safety, which is why researchers recommend that parents establish healthy habits around social media usage to protect them. To prevent such detrimental symptoms in children using social and digital media, scholars suggest parental mediation in two phases depending on a child's life stage: enabling/active mediation for the elementary and middle school years and observant mediation for teens and older adolescents. Enabling mediation balances promotion and protection, and parents take an active role in their child's social/digital media exposure by encouraging open and constructive discussion. The method is centered around the parent's explanation and evaluation of social media and aims to develop empathy, critical thinking, and other vital skills of competent communicators in children. Observant mediation allows children to engage in social/digital media without intervention or supervision. This method is best used with older adolescents, for whom social media provides a space for "self-expression, socializing, and the exchange of knowledge," and can educate users on digital literacy and online interpersonal communication in valuable areas. In this stage, it is important for parents to be aware of social media "...tak[ing] away from interpersonal relationships that nurture social skills…" and potential struggles with "self-esteem, social comparison, and peer connections." In conclusion, scholars encourage parents to combine both techniques, allowing their children privacy while providing them security, maintaining an open line of communication, and creating a positive atmosphere around social media to prevent mental harm. The lens social media creates A 2017 Washington Post study found that 55% of people who got plastic surgery did so to appear better in selfie pictures. Social media has created an environment in which people look at themselves through a unique lens. This lens may showcase whether the person is deemed worthy and whether they meet the requirements to fit into modern-day society. At its core, social media is a place where people compare themselves and constantly attempt to better their online appearance, as evidenced by the aforementioned Washington Post study. There is not one set lens that people use to compare themselves; rather, people can view themselves in any manner that is applicable to their lives. This is one reason that people who come from poorer backgrounds and broken families are more likely to abuse social media. According to the study Sociodemographic factors and social media use in 9-year-old children: the Generation R Study, children from poorer backgrounds or broken homes are significantly more likely to use and abuse social media. In the study, they were found to experience more negative impacts on their lives than children coming from wealthier and more stable families. This may be because they use social media as an escape, or because they view it through their own lens and develop mental health problems when they see people with perceived better lives than their own. Living with everyday evaluations Because social media plays such a significant role within society, our everyday lives are filled with constant evaluations based on the feedback we receive on social media.
Blake Hallinan and Jed R. Brubaker (2021) discuss the significance of the "like" button on social media platforms, such as Facebook, Instagram, and Twitter, as an online form of evaluation. They explain that the like button is more than just a positive or good status update but is now interpreted as a "currency for self-esteem and belonging" (p. 1). To understand how social media users interpret the likes they receive on their accounts, the researchers conducted in-depth interviews with twenty-five self-identified artists who actively use Instagram to share their artwork. Hallinan and Brubaker explain that they chose to interview artists because artists take a lot of pride in their work and can be significantly influenced by the feedback they receive. The interviews consisted of questions regarding the participants' artwork, their experience and knowledge of Instagram, and their interpretation of the network's like button. Based on the responses from the interviewees, the researchers found that some participants were unaffected by the number of likes they received on posts of their artwork. However, they did find that the artists who were deeply affected by the feedback on their Instagram posts experienced doubt about their artwork and about themselves, and their personal self-esteem decreased as a result. Overall, the researchers emphasized the impact of likes among social media users and concluded that the like button is more than just a good rating; it is a form of personal approval. The need to belong Belongingness is the personal experience of being involved in a system or group. There are two major components of belongingness: the feeling of being valued or needed in the group and fitting into the group. The sense of belongingness is said to stem from attachment theories. Neubaum and Kramer (2015) state that individuals with a greater desire to form attachments have a stronger need to belong to a group. Roy Baumeister and Mark Leary discussed the need-to-belong theory in a 1995 paper. They describe the strong effects of belongingness and state that humans have a "basic desire to form social attachments." Without social interactions, we are deprived of emotion and are prone to more illness, physical and psychological, in the future. In 2010, Judith Gere and Geoff MacDonald found inconsistencies in the research done on this topic and reported updated findings. Research still supported that a lack of social interactions leads to negative outcomes in the future. When these needs were not met, an individual's daily life seemed to be negatively affected. However, questions about an individual's interpersonal problems, such as sensitivity and self-regulation, remain open. In today's world, social media may be the outlet through which the need to belong is fulfilled for individuals. Social media platforms such as Facebook, Twitter, and others are updated daily with details of people's personal lives and what they are doing. This in turn gives the perception of being close to people without actually speaking with them. Individuals contribute to social media by 'liking' posts, commenting, updating statuses, tweeting, posting photos and videos, and more. Sixty Facebook users were recruited in a study by Neubaum and Kramer (2015) to complete a series of questionnaires, spend ten minutes on Facebook, and then complete post-Facebook perception and emotional status questionnaires.
These individuals perceived more social closeness on Facebook, which led to maintaining relationships. Individuals with a higher need to belong also relied on Facebook, but more through private messages. This allowed them to belong in a one-on-one setting, or in a more personal way with a group of members who were more significant to them. Active Facebook users, individuals who posted and contributed to their newsfeed, had a greater sense of social closeness, whereas passive Facebook users, who only viewed posts and did not contribute to the newsfeed, had a lesser sense of social closeness. These findings indicate that social closeness and belonging on social media are dependent on the individual's own interactions and usage style. In a study conducted by Cohen & Lancaster (2014), 451 individuals were asked to complete a survey online. The results suggested that social media usage during television viewing made individuals feel as though they were watching the shows in a group setting. Different emotional reactions to a show were found on all social media platforms through hashtags for that show. These emotional reactions were responses to certain parts of the show, to its characters, and to the show overall. In this way, social media enhanced people's social interactions just as if they were co-viewing television face-to-face. Individuals with high needs to belong can use social media to participate in social interactions regularly, in a broader sense (Cohen & Lancaster, 2014). Social media also has a tendency to make people go viral overnight, and not always in their best interest; in one case, a woman from Pakistan became a meme overnight and was abandoned by her community. The relationship between the virtual and the real is closely intertwined, and it has a direct and, in certain cases, devastating effect on people's relationships and their belongingness to their groups. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-193] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were capable with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started experiencing similar characteristics as that of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech, Internaut refers to operators or technically highly capable users of the Internet, digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information "for the majority of the global North population".: 111 Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. 
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de-facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allow groups to easily form, cheaply communicate, and share ideas. 
An example of collaborative software is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, including insults and hate speech, to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationship. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equate to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. 
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television.: 19 Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web; a minimal illustrative request is sketched at the end of this passage. Web services also use HTTP for communication between software systems for information transfer and for sharing and exchanging business data and logistics; HTTP is one of many languages or protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
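As a brief aside to the HTTP and URI description above, the following minimal Python sketch retrieves a document over HTTP using only the standard library. It is illustrative rather than normative: the URL shown (https://example.org/) is a generic placeholder, not a site discussed in this article.

    # Minimal sketch: one HTTP GET request issued with Python's standard library.
    # The URL is a placeholder; any publicly reachable web resource would do.
    from urllib.request import urlopen

    with urlopen("https://example.org/") as response:
        status = response.status                                   # numeric HTTP status code, e.g. 200
        content_type = response.headers.get("Content-Type")        # media type reported by the server
        body = response.read().decode("utf-8", errors="replace")   # the document itself

    print(status, content_type, len(body))

In practice, a browser performs the same request-response exchange for every hyperlink followed, then renders the returned hypertext instead of printing it.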
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region:[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables and governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information, represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on the first two components.) This is a suite of protocols that are ordered into a set of four conceptional layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123:[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. 
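The layered arrangement described above can be made concrete with a short, hedged sketch: an application-layer message is handed to a transport-layer TCP connection, which the operating system carries inside IP packets. The example below is self-contained and runs entirely over the loopback interface, so it assumes nothing about any real Internet host or service.

    # Illustrative sketch of the protocol layering: an application-layer payload
    # sent over TCP (transport layer), which the OS encapsulates in IP packets
    # (internet layer) on the loopback interface.
    import socket
    import threading

    def echo_once(server_sock):
        conn, _addr = server_sock.accept()     # accept a single TCP connection
        with conn:
            data = conn.recv(1024)             # read the application-layer payload
            conn.sendall(data)                 # echo it back unchanged

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))              # port 0: let the OS choose a free port
    server.listen(1)
    threading.Thread(target=echo_once, args=(server,), daemon=True).start()

    with socket.create_connection(server.getsockname()) as client:
        client.sendall(b"hello, internet\n")   # application data goes down the stack
        print(client.recv(1024).decode())      # and comes back up: prints "hello, internet"

    server.close()

The same socket interface is used whether the peer sits on the loopback interface or on the far side of the global Internet; the lower layers handle addressing and routing transparently.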
IP addresses consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol or configured manually.[citation needed] The Domain Name System converts user-inputted domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network.: 1, 16 Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
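The CIDR and netmask arithmetic described above can be verified with a short sketch using Python's standard ipaddress module, reusing the 198.51.100.0/24 and 2001:db8::/32 examples from the text; the host address 198.51.100.42 is an arbitrary illustration chosen from inside that range.

    # Sketch of the CIDR / subnet-mask arithmetic described above.
    import ipaddress

    network = ipaddress.ip_network("198.51.100.0/24")
    print(network.netmask)            # 255.255.255.0, the subnet mask of a /24 prefix
    print(network.num_addresses)      # 256 addresses, 198.51.100.0 through 198.51.100.255

    host = ipaddress.ip_address("198.51.100.42")
    print(host in network)            # True: the host lies within the prefix

    # The routing prefix is recovered by a bitwise AND of the address and the netmask.
    prefix = ipaddress.ip_address(int(host) & int(network.netmask))
    print(prefix)                     # 198.51.100.0

    # The IPv6 block 2001:db8::/32 spans 2**96 addresses (128-bit addresses, 32-bit prefix).
    print(ipaddress.ip_network("2001:db8::/32").num_addresses == 2**96)   # True

Routers perform essentially this prefix comparison, preferring the longest matching entry in their routing tables, when deciding where to forward a packet.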
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, access to certain types of websites, or communication via email or chat with certain parties. Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive websites on individual computers or networks in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Chart: Global Internet Traffic Volume, in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, in which approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. See also Notes References Sources Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Al-Muwaylih] | [TOKENS: 292] |
Contents Al-Muwaylih Al-Muwaylih (Arabic: المويْلح, El Muweilih) was a Palestinian village in the Jaffa Subdistrict. It was depopulated during the 1948 Palestine War. History In the 1931 census of Palestine, conducted by the British Mandate authorities, Malalha had 37 Muslim inhabitants. In the 1945 statistics, the population numbered 360 Muslims, who had a total of 3,342 dunams of land. Of this, 949 dunams were planted with citrus and bananas, 27 dunams were plantations and irrigable land, and 1,796 were used for cereals, while a total of 194 dunams were classified as non-cultivable areas. Neve Yarak is located partly on Al-Muwaylih land and partly on land formerly belonging to Jaljuliya. In 1992, the village site was described: "The site is very difficult to identify. Some of the houses still stand, deserted, amidst wild vegetation. One of them belonged to Hashim al-Jayyusi, who later became a Jordanian cabinet minister. It is a two-storey, concrete structure with rectangular doors and windows and a stairway in front that leads to the second storey. The other villas have been reduced to rubble. The land in the area is cultivated." References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Middle_East#cite_ref-73] | [TOKENS: 6152] |
Contents Middle East The Middle East[b] is a geopolitical region encompassing the Arabian Peninsula, Egypt, Iran, Iraq, the Levant, and Turkey. The term came into widespread usage by Western European nations in the early 20th century as a replacement of the term Near East (both were in contrast to the Far East). The term "Middle East" has led to some confusion over its changing definitions. Since the late 20th century, it has been criticized as being too Eurocentric. The region includes the vast majority of the territories included in the closely associated definition of West Asia, but without the South Caucasus. It also includes all of Egypt (not just the Sinai region) and all of Turkey (including East Thrace). Most Middle Eastern countries (13 out of 18) are part of the Arab world. The three most populous countries in the region are Egypt, Iran, and Turkey, while Saudi Arabia is the largest Middle Eastern country by area. The history of the Middle East dates back to ancient times, and it was long considered the "cradle of civilization". The geopolitical importance of the region has been recognized and competed for during millennia. The Abrahamic religions (Judaism, Christianity, and Islam) have their origins in the Middle East. Arabs constitute the main ethnic group in the region, followed by Turks, Persians, Kurds, Jews, and Assyrians. The Middle East generally has a hot, arid climate, especially in the Arabian and Egyptian regions. Several major rivers provide irrigation to support agriculture in limited areas here, such as the Nile Delta in Egypt, the Tigris and Euphrates watersheds of Mesopotamia, and the basin of the Jordan River that spans most of the Levant. These regions are collectively known as the Fertile Crescent, and comprise the core of what historians had long referred to as the cradle of civilization; multiple regions of the world have since been classified as also having developed independent, original civilizations. Conversely, the Levantine coast and most of Turkey have relatively temperate climates typical of the Mediterranean, with dry summers and cool, wet winters. Most of the countries that border the Persian Gulf have vast reserves of petroleum. Monarchs of the Arabian Peninsula in particular have benefitted economically from petroleum exports. Because of the arid climate and dependence on the fossil fuel industry, the Middle East is both a major contributor to climate change and a region that is expected to be severely adversely affected by it. Other concepts of the region exist, including the broader Middle East and North Africa (MENA), which includes states of the Maghreb and the Sudan. The term the "Greater Middle East" also includes Afghanistan, Mauritania, Pakistan, as well as parts of East Africa, and sometimes Central Asia and the South Caucasus. Terminology The term "Middle East" may have originated in the 1850s in the British India Office. However, it became more widely known when United States naval strategist Alfred Thayer Mahan used the term in 1902 to "designate the area between Arabia and India". During this time the British and Russian empires were vying for influence in Central Asia, a rivalry that would become known as the Great Game. Mahan realized not only the strategic importance of the region, but also of its center, the Persian Gulf. He labeled the area surrounding the Persian Gulf as the Middle East. 
He said that, beyond Egypt's Suez Canal, the Gulf was the most important passage for Britain to control in order to keep the Russians from advancing towards British India. Mahan first used the term in his article "The Persian Gulf and International Relations", published in September 1902 in the National Review, a British journal. The Middle East, if I may adopt a term which I have not seen, will some day need its Malta, as well as its Gibraltar; it does not follow that either will be in the Persian Gulf. Naval force has the quality of mobility which carries with it the privilege of temporary absences; but it needs to find on every scene of operation established bases of refit, of supply, and in case of disaster, of security. The British Navy should have the facility to concentrate in force if occasion arise, about Aden, India, and the Persian Gulf. Mahan's article was reprinted in The Times and followed in October by a 20-article series entitled "The Middle Eastern Question", written by Sir Ignatius Valentine Chirol. During this series, Sir Ignatius expanded the definition of Middle East to include "those regions of Asia which extend to the borders of India or command the approaches to India." After the series ended in 1903, The Times removed quotation marks from subsequent uses of the term. Until World War II, it was customary to refer to areas centered on Turkey and the eastern shore of the Mediterranean as the "Near East", while the "Far East" centered on China, India and Japan. The Middle East was then defined as the area from Mesopotamia to Burma; namely, the area between the Near East and the Far East. This area broadly corresponds to South Asia. In the late 1930s, the British established the Middle East Command, which was based in Cairo, for its military forces in the region. After that time, the term "Middle East" gained broader usage in Europe and the United States. Following World War II, for example, the Middle East Institute was founded in Washington, D.C. in 1946. The corresponding adjective is Middle Eastern and the derived noun is Middle Easterner. While non-Eurocentric terms such as "Southwest Asia" or "Swasia" have been sparsely used, the classification of the African country, Egypt, among those counted in the Middle East challenges the usefulness of using such terms. The description Middle has also led to some confusion over changing definitions. Before the First World War, "Near East" was used in English to refer to the Balkans and the Ottoman Empire, while "Middle East" referred to the Caucasus, Persia, and Arabian lands, and sometimes Afghanistan, India and others. In contrast, "Far East" referred to the countries of East Asia (e.g. China, Japan, and Korea). With the collapse of the Ottoman Empire in 1918, "Near East" largely fell out of common use in English, while "Middle East" came to be applied to the emerging independent countries of the Islamic world. However, the usage "Near East" was retained by a variety of academic disciplines, including archaeology and ancient history. In their usage, the term describes an area identical to the term Middle East, which is not used by these disciplines (see ancient Near East).[citation needed] The first official use of the term "Middle East" by the United States government was in the 1957 Eisenhower Doctrine, which pertained to the Suez Crisis. 
Secretary of State John Foster Dulles defined the Middle East as "the area lying between and including Libya on the west and Pakistan on the east, Syria and Iraq on the North and the Arabian peninsula to the south, plus the Sudan and Ethiopia." In 1958, the State Department explained that the terms "Near East" and "Middle East" were interchangeable, and defined the region as including only Egypt, Syria, Israel, Lebanon, Jordan, Iraq, Saudi Arabia, Kuwait, Bahrain, and Qatar. Since the late 20th century, scholars and journalists from the region, such as journalist Louay Khraish and historian Hassan Hanafi, have criticized the use of "Middle East" as a Eurocentric and colonialist term. The Associated Press Stylebook of 2004 says that Near East formerly referred to the farther west countries while Middle East referred to the eastern ones, but that now they are synonymous. It instructs: Use Middle East unless Near East is used by a source in a story. Mideast is also acceptable, but Middle East is preferred. European languages have adopted terms similar to Near East and Middle East. Since these are based on a relative description, the meanings depend on the country and are generally different from the English terms. In German, the term Naher Osten (Near East) is still in common use (nowadays the term Mittlerer Osten is more and more common in press texts translated from English sources, albeit having a distinct meaning). In four Slavic languages, Russian (Ближний Восток, Blizhniy Vostok), Bulgarian (Близкия Изток), Polish (Bliski Wschód), and Croatian (Bliski istok), terms meaning Near East are the only appropriate ones for the region. However, some European languages do have "Middle East" equivalents, such as French Moyen-Orient, Swedish Mellanöstern, Spanish Oriente Medio or Medio Oriente, Greek Μέση Ανατολή (Mesi Anatoli), and Italian Medio Oriente.[c] Perhaps because of the political influence of the United States and Europe, and the prominence of the Western press, the Arabic equivalent of Middle East (Arabic: الشرق الأوسط ash-Sharq al-Awsaṭ) has become standard usage in the mainstream Arabic press. It has the same meaning as the term "Middle East" in North American and Western European usage. The designation Mashriq, also from the Arabic root for East, denotes a variously defined region around the Levant, the eastern part of the Arabic-speaking world (as opposed to the Maghreb, the western part). Even though the term originated in the West, countries of the Middle East that use languages other than Arabic also use that term in translation. For instance, the Persian equivalent for Middle East is خاورمیانه (Khāvar-e miyāneh), the Hebrew is המזרח התיכון (hamizrach hatikhon), and the Turkish is Orta Doğu. Countries and territory Traditionally included within the Middle East are Arabia, Asia Minor, East Thrace, Egypt, Iran, the Levant, Mesopotamia, and the Socotra Archipelago. The region includes 17 UN-recognized countries and one British Overseas Territory. Various concepts are often paralleled to the Middle East, most notably the Near East, Fertile Crescent, and Levant. These are geographical concepts, which refer to large sections of the modern-day Middle East, with the Near East being the closest to the Middle East in its geographical meaning. Because it is primarily Arabic-speaking, the Maghreb region of North Africa is sometimes included.
"Greater Middle East" is a political term coined by the second Bush administration in the first decade of the 21st century to denote various countries, pertaining to the Muslim world, specifically Afghanistan, Iran, Pakistan, and Turkey. Various Central Asian countries are sometimes also included. History The Middle East lies at the juncture of Africa and Eurasia and of the Indian Ocean and the Mediterranean Sea (see also: Indo-Mediterranean). It is the birthplace and spiritual center of religions such as Christianity, Islam, Judaism, Manichaeism, Yezidi, Druze, Yarsan, and Mandeanism, and in Iran, Mithraism, Zoroastrianism, Manicheanism, and the Baháʼí Faith. Throughout its history the Middle East has been a major center of world affairs; a strategically, economically, politically, culturally, and religiously sensitive area. The region is one of the regions where agriculture was independently discovered, and from the Middle East it was spread, during the Neolithic, to different regions of the world such as Europe, the Indus Valley and Eastern Africa. Prior to the formation of civilizations, advanced cultures formed all over the Middle East during the Stone Age. The search for agricultural lands by agriculturalists, and pastoral lands by herdsmen meant different migrations took place within the region and shaped its ethnic and demographic makeup. The Middle East is widely and most famously known as the cradle of civilization. The world's earliest civilizations, Mesopotamia (Sumer, Akkad, Assyria and Babylonia), ancient Egypt and Kish in the Levant, all originated in the Fertile Crescent and Nile Valley regions of the ancient Near East. These were followed by the Hittite, Greek, Hurrian and Urartian civilisations of Asia Minor; Elam, Persia and Median civilizations in Iran, as well as the civilizations of the Levant (such as Ebla, Mari, Nagar, Ugarit, Canaan, Aramea, Mitanni, Phoenicia and Israel) and the Arabian Peninsula (Magan, Sheba, Ubar). The Near East was first largely unified under the Neo Assyrian Empire, then the Achaemenid Empire followed later by the Macedonian Empire and after this to some degree by the Iranian empires (namely the Parthian and Sassanid Empires), the Roman Empire and Byzantine Empire. The region served as the intellectual and economic center of the Roman Empire and played an exceptionally important role due to its periphery on the Sassanid Empire. Thus, the Romans stationed up to five or six of their legions in the region for the sole purpose of defending it from Sassanid and Bedouin raids and invasions. From the 4th century CE onwards, the Middle East became the center of the two main powers at the time, the Byzantine Empire and the Sassanid Empire. However, it would be the later Islamic Caliphates of the Middle Ages, or Islamic Golden Age which began with the Islamic conquest of the region in the 7th century AD, that would first unify the entire Middle East as a distinct region and create the dominant Islamic Arab ethnic identity that largely (but not exclusively) persists today. The 4 caliphates that dominated the Middle East for more than 600 years were the Rashidun Caliphate, the Umayyad caliphate, the Abbasid caliphate and the Fatimid caliphate. Additionally, the Mongols would come to dominate the region, the Kingdom of Armenia would incorporate parts of the region to their domain, the Seljuks would rule the region and spread Turko-Persian culture, and the Franks would found the Crusader states that would stand for roughly two centuries. 
Josiah Russell estimates the population of what he calls "Islamic territory" as roughly 12.5 million in 1000 – Anatolia 8 million, Syria 2 million, and Egypt 1.5 million. From the 16th century onward, the Middle East came to be dominated, once again, by two main powers: the Ottoman Empire and the Safavid dynasty. The modern Middle East began after World War I, when the Ottoman Empire, which was allied with the Central Powers, was defeated by the Allies and partitioned into a number of separate nations, initially under British and French Mandates. Other defining events in this transformation included the establishment of Israel in 1948 and the eventual departure of European powers, notably Britain and France by the end of the 1960s. They were supplanted in some part by the rising influence of the United States from the 1970s onwards. In the 20th century, the region's significant stocks of crude oil gave it new strategic and economic importance. Mass production of oil began around 1945, with Saudi Arabia, Iran, Kuwait, Iraq, and the United Arab Emirates having large quantities of oil. Estimated oil reserves, especially in Saudi Arabia and Iran, are some of the highest in the world, and the international oil cartel OPEC is dominated by Middle Eastern countries. During the Cold War, the Middle East was a theater of ideological struggle between the two superpowers and their allies: NATO and the United States on one side, and the Soviet Union and Warsaw Pact on the other, as they competed to influence regional allies. Besides the political reasons there was also the "ideological conflict" between the two systems. Moreover, as Louise Fawcett argues, among many important areas of contention, or perhaps more accurately of anxiety, were, first, the desires of the superpowers to gain strategic advantage in the region, second, the fact that the region contained some two-thirds of the world's oil reserves in a context where oil was becoming increasingly vital to the economy of the Western world [...] Within this contextual framework, the United States sought to divert the Arab world from Soviet influence. Throughout the 20th and 21st centuries, the region has experienced both periods of relative peace and tolerance and periods of conflict particularly between Sunnis and Shiites. Geography In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas emissions (GHG) despite making up only 6% of the global population. These emissions are mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the extensive oil and natural gas reserves that are found within the region. The Middle East region is one of the most vulnerable to climate change. The impacts include increase in drought conditions, aridity, heatwaves and sea level rise. Sharp global temperature and sea level changes, shifting precipitation patterns and increased frequency of extreme weather events are some of the main impacts of climate change as identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts due to its arid and semi-arid environment, facing climatic challenges such as low rainfall, high temperatures and dry soil. The climatic conditions that foster such challenges for MENA are projected by the IPCC to worsen throughout the 21st century. 
If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100. Climate change is expected to put significant strain on already scarce water and agricultural resources within the MENA region, threatening the national security and political stability of all included countries. Over 60 percent of the region's population lives in high and very high water-stressed areas, compared to the global average of 35 percent. This has prompted some MENA countries to engage with the issue of climate change on an international level through environmental accords such as the Paris Agreement. Law and policy are also being established on a national level amongst MENA countries, with a focus on the development of renewable energies. Economy Middle Eastern economies range from very poor (such as Gaza and Yemen) to extremely wealthy (such as Qatar and the UAE). According to the International Monetary Fund, the three largest Middle Eastern economies in nominal GDP in 2023 were Saudi Arabia ($1.06 trillion), Turkey ($1.03 trillion), and Israel ($0.54 trillion). For nominal GDP per person, the highest-ranking countries are Qatar ($83,891), Israel ($55,535), the United Arab Emirates ($49,451) and Cyprus ($33,807). Turkey ($3.6 trillion), Saudi Arabia ($2.3 trillion), and Iran ($1.7 trillion) had the largest economies in terms of GDP PPP. For GDP PPP per person, the highest-ranking countries are Qatar ($124,834), the United Arab Emirates ($88,221), Saudi Arabia ($64,836), Bahrain ($60,596) and Israel ($54,997). The lowest-ranking country in the Middle East, in terms of nominal GDP per capita, is Yemen ($573). The economic structures of Middle Eastern nations differ: while some are heavily dependent on the export of oil and oil-related products (Saudi Arabia, the UAE and Kuwait), others have a highly diverse economic base (such as Cyprus, Israel, Turkey and Egypt). Industries of the Middle Eastern region include oil and oil-related products, agriculture, cotton, cattle, dairy, textiles, leather products, surgical instruments, and defence equipment (guns, ammunition, tanks, submarines, fighter jets, UAVs, and missiles). Banking is an important sector, especially for the UAE and Bahrain. With the exception of Cyprus, Turkey, Egypt, Lebanon and Israel, tourism has been a relatively undeveloped area of the economy, in part because of the socially conservative nature of the region as well as political turmoil in certain regions. Since the end of the COVID pandemic, however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists because of improving tourist facilities and the relaxing of tourism-related restrictive policies. Unemployment is high in the Middle East and North Africa region, particularly among people aged 15–29, a demographic representing 30% of the region's population. The total regional unemployment rate in 2025 is 10.8%, and among youth is as high as 28%. Demographics Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic peoples (Turkish, Azeris, Syrian Turkmen, and Iraqi Turkmen). Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Greek Cypriots, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas.
European ethnic groups that form a diaspora in the region include Albanians, Bosniaks, Circassians (including Kabardians), Crimean Tatars, Greeks, Franco-Levantines, Italo-Levantines, and Iraqi Turkmens. Among other migrant populations are Chinese, Filipinos, Indians, Indonesians, Pakistanis, Pashtuns, Romani, and Afro-Arabs. "Migration has always provided an important vent for labor market pressures in the Middle East. For the period between the 1970s and 1990s, the Arab states of the Persian Gulf in particular provided a rich source of employment for workers from Egypt, Yemen and the countries of the Levant, while Europe had attracted young workers from North African countries due both to proximity and the legacy of colonial ties between France and the majority of North African states." According to the International Organization for Migration, there are 13 million first-generation migrants from Arab nations in the world, of which 5.8 million reside in other Arab countries. Expatriates from Arab countries contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009, Arab countries received a total of US$35.1 billion in remittance in-flows, and remittances sent to Jordan, Egypt and Lebanon from other Arab countries are 40 to 190 per cent higher than trade revenues between these and other Arab countries. In Somalia, the Somali Civil War has greatly increased the size of the Somali diaspora, as many of the best educated Somalis left for Middle Eastern countries as well as Europe and North America. Non-Arab Middle Eastern countries such as Turkey, Israel and Iran are also subject to important migration dynamics. A fair proportion of those migrating from Arab nations are from ethnic and religious minorities facing persecution and are not necessarily ethnic Arabs, Iranians or Turks.[citation needed] Large numbers of Kurds, Jews, Assyrians, Greeks and Armenians as well as many Mandeans have left nations such as Iraq, Iran, Syria and Turkey for these reasons during the last century. In Iran, many religious minorities such as Christians, Baháʼís, Jews and Zoroastrians have left since the Islamic Revolution of 1979. The Middle East is very diverse when it comes to religions, many of which originated there. Islam is the largest religion in the Middle East, but other faiths that originated there, such as Judaism and Christianity, are also well represented. Christian communities have played a vital role in the Middle East, and they represent 78% of the population of Cyprus and 40.5% of that of Lebanon, where the Lebanese president, half of the cabinet, and half of the parliament follow one of the various Lebanese Christian rites. There are also important minority religions like the Baháʼí Faith, Yarsanism, Yazidism, Zoroastrianism, Mandaeism, Druze, and Shabakism, and in ancient times the region was home to Mesopotamian religions, Canaanite religions, Manichaeism, Mithraism and various monotheist gnostic sects. The six top languages, in terms of numbers of speakers, are Arabic, Persian, Turkish, Kurdish, Modern Hebrew and Greek. About 20 minority languages are also spoken in the Middle East. Arabic, with all its dialects, is the most widely spoken language in the Middle East, with Literary Arabic being official in all North African and in most West Asian countries. Arabic dialects are also spoken in some adjacent areas in neighbouring Middle Eastern non-Arab countries. It is a member of the Semitic branch of the Afro-Asiatic languages.
Several Modern South Arabian languages such as Mehri and Soqotri are also spoken in Yemen and Oman. Another Semitic language is Aramaic, whose dialects are spoken mainly by Assyrians and Mandaeans, with Western Aramaic still spoken in two villages near Damascus, Syria. There is also an Oasis Berber-speaking community in Egypt where the language is also known as Siwa. It is a non-Semitic Afro-Asiatic sister language. Persian is the second most spoken language. While it is primarily spoken in Iran and some border areas in neighbouring countries, the country is one of the region's largest and most populous. It belongs to the Indo-Iranian branch of the family of Indo-European languages. Other Western Iranic languages spoken in the region include Achomi, Daylami, Kurdish dialects, Semmani, and Lurish, amongst many others. A close third in number of speakers, Turkish is largely confined to Turkey, which is also one of the region's largest and most populous countries, but it is present in areas in neighboring countries. It is a member of the Turkic languages, which have their origins in East Asia. Another Turkic language, Azerbaijani, is spoken by Azerbaijanis in Iran. The fourth-most widely spoken language, Kurdish, is spoken in Iran, Iraq, Syria and Turkey. Sorani Kurdish is the second official language in Iraq (instated after the 2005 constitution) after Arabic. Hebrew is the official language of Israel, with Arabic given a special status since a 2018 Basic Law lowered it from its earlier standing as an official language. Hebrew is spoken and used by over 80% of Israel's population, the other 20% using Arabic. Modern Hebrew only began being spoken in the 20th century after being revived in the late 19th century by Eliezer Ben-Yehuda (Eliezer Perlman) and European Jewish settlers, with the first native Hebrew speaker being born in 1882. Greek is one of the two official languages of Cyprus, and the country's main language. Small communities of Greek speakers exist all around the Middle East; until the 20th century it was also widely spoken in Asia Minor (being the second most spoken language there, after Turkish) and Egypt. During antiquity, Ancient Greek was the lingua franca for many areas of the western Middle East and until the Muslim expansion it was widely spoken there as well. Until the late 11th century, it was also the main spoken language in Asia Minor; after that it was gradually replaced by the Turkish language as the Anatolian Turks expanded and the local Greeks were assimilated, especially in the interior. English is one of the official languages of Akrotiri and Dhekelia. It is also commonly taught and used as a second language in countries such as Egypt, Jordan, Iran, Iraq, Qatar, Bahrain, the United Arab Emirates and Kuwait. It is also a main language in some Emirates of the United Arab Emirates. It is also spoken as a native language by Jewish immigrants from Anglophone countries (the UK, US, and Australia) in Israel and is widely understood as a second language there. French is taught and used in many government facilities and media in Lebanon, and is taught in some primary and secondary schools of Egypt and Syria. Maltese, a Semitic language mainly spoken in Europe, is used by the Franco-Maltese diaspora in Egypt. Due to widespread immigration of French Jews to Israel, French is the native language of approximately 200,000 Jews in Israel. Armenian speakers are also to be found in the region. Georgian is spoken by the Georgian diaspora.
Russian is spoken by a large portion of the Israeli population, because of emigration in the late 1990s. Russian today is a popular unofficial language in use in Israel; news, radio and sign boards can be found in Russian around the country after Hebrew and Arabic. Circassian is also spoken by the diaspora in the region and by almost all Circassians in Israel who speak Hebrew and English as well. The largest Romanian-speaking community in the Middle East is found in Israel, where as of 1995[update] Romanian is spoken by 5% of the population.[d] Bengali, Hindi and Urdu are widely spoken by migrant communities in many Middle Eastern countries, such as Saudi Arabia (where 20–25% of the population is South Asian), the United Arab Emirates (where 50–55% of the population is South Asian), and Qatar, which have large numbers of Pakistani, Bangladeshi and Indian immigrants. Culture The Middle East has recently become more prominent in hosting global sport events due to its wealth and desire to diversify its economy. The South Asian diaspora is a major backer of cricket in the region. See also Notes References Further reading External links 29°N 41°E / 29°N 41°E / 29; 41 |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_ref-Clarke2014_56-0] | [TOKENS: 6011] |
Contents Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sports, trophies or profits. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports.
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites.
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles which mainly eat sponges. Most animals rely on biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are the Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011.[a] Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges based on molecular clock estimates for the origin of 24-ipc production in both groups. Analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature as animals. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing the external phylogeny shown in the cladogram. Uncertainty of relationships is indicated with dashed lines. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. [Cladogram of animals and their unicellular relatives, listing Holomycota (including fungi), Ichthyosporea, Pluriformea, and Filasterea.] The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. In addition to sponges, Placozoa has no symmetry and was often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, with the following cladogram for the sponge-sister view that they supported (their ctenophore-sister tree simply interchanging the places of ctenophores and sponges): [Cladogram listing Porifera, Ctenophora, Placozoa, Cnidaria, and Bilateria.] Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct the following ctenophore-sister phylogeny: [Cladogram listing Ctenophora, Porifera, Placozoa, Cnidaria, and Bilateria.] Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research.
The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogenetic tree for the Bilateria is shown below. [Cladogram listing Xenacoelomorpha, Ambulacraria, Chordata, Ecdysozoa, and Spiralia.] Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs are hunted or farmed for food, fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world. 
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since their discovery in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros, and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship. See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars_trilogy] | [TOKENS: 7871] |
Contents Mars trilogy The Mars trilogy is a series of science fiction novels by Kim Stanley Robinson that chronicles the settlement and terraforming of the planet Mars through the personal and detailed viewpoints of a wide variety of characters spanning 187 years, from 2026 to 2212. Ultimately more utopian than dystopian, the story focuses on egalitarian, sociological, and scientific advances made on Mars, while Earth suffers from overpopulation and ecological disaster. The three novels are Red Mars (1992), Green Mars (1993), and Blue Mars (1996). The Martians (1999) is a collection of short stories set in the same fictional universe. Red Mars won the BSFA Award in 1992 and the Nebula Award for Best Novel in 1993. Green Mars won the Hugo Award for Best Novel and the Locus Award for Best Science Fiction Novel in 1994. Blue Mars also won the Hugo and Locus Awards in 1997. Icehenge (1984), Robinson's first novel about Mars, is not set in this universe but deals with similar themes and plot elements. The trilogy shares some similarities with Robinson's more recent novel 2312 (2012); for instance, the terraforming of Mars and the extreme longevity of the characters in both novels. Plot Red Mars starts in 2026 with the first colonial voyage to Mars aboard the Ares, the largest interplanetary spacecraft ever built and home to a crew who are to be the first hundred Martian colonists. The ship was built from clustered space shuttle external fuel tanks which, instead of reentering Earth's atmosphere, had been boosted into orbit until enough had been amassed to build the ship. The mission is a joint American–Russian undertaking, and seventy of the First Hundred are drawn from these countries; the rest come from other nations (for example, Michel Duval, a French psychologist assigned to observe their behavior). The book details: the trip out; construction of the first Martian settlement (eventually called Underhill) by Russian engineer Nadia Cherneshevsky, as well as establishing colonies on Mars' hollowed-out asteroid-moon Phobos; the ever-changing relationships between the colonists; and debates among the colonists regarding both the terraforming of the planet and its future relationship to Earth. The two extreme views on terraforming are personified by Saxifrage "Sax" Russell, who believes their very presence on the planet means some level of terraforming has already begun and that it is humanity's obligation to spread life, as it is the scarcest thing in the known universe; and Ann Clayborne, who posits that humankind does not have the right to change entire planets at will. Russell's view is initially purely scientific but in time comes to blend with the views of Hiroko Ai, the chief of the Agricultural Team, who assembles a new belief system (the "Areophany") devoted to the appreciation and furthering of life ("viriditas"); these views are collectively known as the "Green" position, while Clayborne's naturalist stance comes to be known as "Red". The actual decision is left to the United Nations Organization of Mars Affairs (UNOMA), which greenlights terraforming, and a series of actions gets underway, including the drilling of "moholes" to release subsurface heat; thickening of the atmosphere according to a complicated biochemical formula that comes to be known as the "Russell cocktail" after Sax Russell; and the detonation of nuclear explosions deep in the subsurface permafrost to release water.
Additional steps are taken to connect Mars more closely with Earth, including the insertion of an areosynchronous asteroid "Clarke" to which a space elevator cable is tethered. Against the backdrop of this development is another debate, the principal instigator of which is Arkady Bogdanov of the Russian contingent (possibly named in homage to the Russian polymath and science fiction writer Alexander Bogdanov—it is later revealed in Blue Mars that Arkady is a descendant of Alexander). Bogdanov argues that Mars need not and should not be subject to Earth traditions, limitations, or authority. He is to some extent joined in this position by John Boone, famous as the "First Man on Mars" from a preceding expedition and rival to Frank Chalmers, the technical leader of the American contingent. Their rivalry is further exacerbated by competing romantic interest in Maya Katarina Toitovna, the leader of the Russian contingent. (In the opening of the book, Chalmers instigates a sequence of events that leads to Boone being assassinated; much of what follows is a retrospective examination of what led to that point.) Earth meanwhile increasingly falls under the control of transnational corporations (transnats) that come to dominate its governments, particularly smaller nations adopted as "flags of convenience" for extending their influence into Martian affairs. As UNOMA's power erodes, the Mars treaty is renegotiated in a move led by Frank Chalmers; the outcome is impressive but proves short-lived as the transnats find ways around it through loopholes. Things get worse as the nations of Earth start to clash over limited resources, expanding debt, and population growth, as well as restrictions on access to a new longevity treatment developed by Martian scientists—one that holds the promise of lifespans into the hundreds of years. In 2061, with Boone dead and exploding immigration threatening the fabric of Martian society, Bogdanov launches a revolution against what many now view as occupying transnat troops operating only loosely under an UNOMA rubber-stamp approval. Initially successful, the revolution proves infeasible on the basis of both a greater-than-expected willingness of the Earth troops to use violence, and the extreme vulnerability of life on a planet without a habitable atmosphere. A series of exchanges sees the cutting of the space elevator cable, bombardment of several Martian cities (including the city where Bogdanov is himself organizing the rebellion; he is killed), the destruction of Phobos and its military complex, and the unleashing of a great flood of torrential groundwater freed by nuclear detonations. By the end, most of the First Hundred are dead, and virtually all who remain have fled to a hidden refuge established years earlier by Ai and her followers (one exception is Phyllis Boyle, who has allied herself with the transnats; she is on Clarke when the space elevator cable is cut and sent flying out of orbit to a fate unknown by the conclusion of the book). The revolution dies and life on Mars returns to a sense of stability under heavy transnat control. The clash over resources on Earth breaks out into a full-blown world war leaving hundreds of millions dead, but ceasefire arrangements are reached when the transnats flee to the safety of the developed nations, which use their huge militaries to restore order, forming police states. However, a new generation of humans born on Mars holds the promise of change. 
In the meantime, the remaining First Hundred—including Russell, Clayborne, Toitovna, and Cherneshevsky—settle into life in Hiroko Ai's refuge called Zygote, hidden under the Martian south pole. Green Mars takes its title from the stage of terraforming that has allowed plants to grow. It picks up the story 50 years after the events of Red Mars, at the dawn of the 22nd century, following the lives of the remaining First Hundred and their children and grandchildren. Melting ice causes the top of the dome of Hiroko Ai's base under the south pole to collapse, forcing the survivors to escape into a (less literal) underground organization known as the Demimonde. Among the expanded group are the First Hundred's children, the Nisei, a number of whom live in Hiroko's second secret base, Gamete. As unrest in the multinational control over Mars' affairs grows, various groups start to form with different aims and methods. Watching these groups evolve from Earth, the CEO of the Praxis Corporation sends a representative, Arthur Randolph, to organize resistance movements. This culminates in the Dorsa Brevia agreement, in which nearly all the underground factions take part. Preparations are made for a second revolution beginning in the 2120s, from converting moholes into missile silos or hidden bases and sabotaging orbital mirrors, to propelling Deimos out of Mars' gravity well and out into deep space so it could never be used as a weapons platform as Phobos was. The book follows the characters across the Martian landscape, which is described in detail. Sax Russell infiltrates the transnat terraforming project, with a carefully crafted fake identity as Stephen Lindholm. The newly evolving Martian biosphere is described at great length, along with more profound changes mostly aimed at warming the surface of Mars to the brink of making it habitable, from continent-sized orbital mirrors and a second space elevator (using another anchored asteroid, dubbed "New Clarke"), to melting the northern polar ice cap and digging moholes deep enough to form volcanoes. A mainstay of the novel is a detailed analysis of the philosophical, political, personal, economic, and geological experiences of the characters. The story weaves back and forth from character to character, providing a picture of Mars as seen by them. Sax, alias Stephen, eventually becomes romantically involved with Phyllis, who had survived the events of 2061 at the end of the first novel, but she discovers his true identity and has him arrested. Members of the underground launch a daring rescue from the prison facility where Sax suffers torture and interrogation that causes him to have a stroke; Maya kills Phyllis in the process of the rescue. The book ends with a major event: a sudden catastrophic rise in Earth's global sea levels, caused not primarily by any greenhouse effect but by the eruption of a chain of volcanoes underneath the ice of West Antarctica, disintegrating the ice sheet and displacing the fragments into the ocean. The resultant flooding causes global chaos on Earth, creating the perfect moment for the Martian underground to seize control of Martian society from Earth. Following a series of largely bloodless coups, an extremist faction of Reds bombs a dam near Burroughs, the major city where the remaining United Nations forces have concentrated, in order to force the security forces to evacuate.
The entire city is flooded and the population of the city has to walk a staggeringly long distance in the open Martian atmosphere (which just barely has the temperature, atmospheric pressure, and gas mixture to support human life) to Libya Station, in order to resettle in other locations. With this, control of Mars is finally wrested away from Earth with minimal loss of life, leaving the weary survivors hopeful about the prospects of their newfound political autonomy. Blue Mars takes its title from the stage of terraforming that has allowed atmospheric pressure and temperature to increase so that liquid water can exist on the planet's surface, forming rivers and seas. It follows closely in time from the end of Green Mars and has a much wider scope than the previous two books, covering an entire century after the second revolution. As Earth is heavily flooded by the sudden melting of the Antarctic ice cap, the once mighty metanats are brought to their knees, as the Praxis Corporation paves a new way of "democratic businesses". Mars becomes the "Head" of the system, giving universal healthcare, free education, and an abundance of food. However, this sparks illegal immigration from Earth, so to ease the population strain on the Blue Planet, Martian scientists and engineers are soon put to the task of creating asteroid cities; where small planetoids of the asteroid belt are hollowed out, given a spin to produce gravity, and a miniature sun is created to produce light and heat. With a vast increase in sciences, technologies, and spacecraft manufacturing, this begins the "Accelerando"; where humankind spreads its civilization throughout the Solar System, and eventually beyond. As Venus, the Jovian moons, the Saturnian moons, and eventually Triton are colonized and terraformed in some way, Jackie Boone (the granddaughter of John Boone, the first man to walk on Mars from the first book) takes an interstellar vessel (made out of an asteroid) to another star system twenty light-years away, where they will start to terraform the planets and moons found there. The remaining First Hundred are generally regarded as living legends. Reports of Hiroko's survival are numerous, and purported sightings occur all over the colonized solar system, but none are substantiated. Nadia and Art Randolph lead a constitutional congress in which a global system of government is established that leaves most cities and settlements generally autonomous, but subject to a central representative legislature and two systems of courts, one legal and the other environmental. The environmental court is packed with members of the Red faction as a concession (in exchange for their support in the congress, as much of their power was broken when they attempted and failed to violently expel remaining UN forces early on after the second revolution of Green Mars; yet they still retained enough power to stymie constitutional negotiations). Vlad, Marina, and Ursula, the original inventors of the longevity treatments, introduce a new economic system that is a hybrid of capitalism, socialism, and environmental conservationism. During a trip to Earth occurring alongside the congress, Nirgal (one of the original children to be born on Mars to the First Hundred, and something of a Mars-wide celebrity), Maya, and Sax negotiate an agreement that allows Earth to send a number of migrants equal to 10% of Mars' population to Mars every year. 
Following the adoption of the new constitution, Nadia is elected the first president of Mars and serves competently, although she does not enjoy politics. She and Art work together closely, and eventually fall in love and have a child. Sax Russell devotes himself to various scientific projects, all the while continuing to recover from the effects of his stroke. Since the second revolution, he feels enormous guilt that his pro-terraforming position became the dominant one at the expense of the goals of Ann's anti-terraforming stance, as Sax and Ann have come to be regarded as the original champions of their respective positions. Sax becomes increasingly preoccupied with seeking forgiveness and approval from Ann, while Ann, depressed and bitter from her many political and personal losses, is suicidal and refuses to accept any more longevity treatments. However, when Sax witnesses Ann collapse into a coma during an attempt to demonstrate to her the beauty of the terraformed world, he arranges for her to be resuscitated and to be treated with the longevity treatment, both against her will. The longevity treatments themselves begin to show weaknesses once those receiving them reach the two-century mark in age. The treatments reduce most aging processes to a negligible rate, but are much less effective when it comes to brain function, and in particular memory. Maya in particular suffers extreme lapses in memory, although she remains high-functioning most of the time. Further, as people age, they begin to show susceptibility to strange, fatal conditions which have no apparent explanation and are resistant to any treatment. Most common is the event that comes to be known as the "quick decline", where a person of extremely advanced age and in apparently good health suffers a sudden fatal heart arrhythmia and dies abruptly. The exact mechanism is never explained. Michel dies of the quick decline while attending the wake of another First Hundred member. Russell speculates that Michel's quick decline was brought on by the shock of seeing Maya fail to remember Frank Chalmers (who was killed while escaping security forces in the first revolution) upon looking at a treasured photo of him on her refrigerator. As a result of this and Russell's own problems with memory, he organizes a team of scientists to develop a medicine that will restore memory. The remaining members of the First Hundred, of which there are only 12, congregate in Underhill and take the medicine. It works so well that Russell remembers his own birth. He and Ann Clayborne finally recall that they had been in love prior to leaving Earth the very first time, but both had been too socially inept and nervous about their chances for selection for the Mars voyage to reveal this to each other. Their famous argument over terraforming had been a mere continuation of a running conversation they had been having since they still lived on Earth. Through the memory treatment it is also revealed that Phyllis had been lobbying to free Sax from his torturers when she was murdered by Maya. Maya herself declines the treatment. Sax also distinctly recalls Hiroko assisting him in finding his rover in a storm, just before he would have frozen to death, and then disappearing once again; he is convinced she remains alive, although the question of whether she is actually alive is never resolved.
Eventually, the anti-immigration factions of the Martian government provoke massive illegal immigration from Earth, risking another war; however, under the leadership of Ann and Sax, who have fallen in love again following their reconciliation, along with Maya, the Martian population unites to reconstitute the government to accept more immigration from Earth, defusing the imminent conflict and ushering in a new golden age of harmony and security on Mars. The Martians is a collection of short stories that takes place over the timespan of the original trilogy of novels, as well as some stories that take place in an alternate version of the novels where the First Hundred's mission was one of exploration rather than colonization. Buried in the stories are several hints about the eventual fate of the Martian terraforming program. Story elements Trans-national Corporations, nicknamed "transnats", are extremely powerful multinational corporations that first emerge in the mid-21st century. Robinson tracks the evolution of the transnats into what he terms "metanats" (metanational). These multinational corporations have grown so large as a result of globalization that they have sufficient economic power to take over or strongly manipulate national governments, initially only relatively small third-world governments, but later, larger developed governments too, effectively running whole countries. In Robinson's future history, the metanational corporations become similar to nation-states in some respects, while continually attempting to take over competitors in order to become the sole controller of the interplanetary market. As the Mars trilogy draws to a close in the mid-23rd century, the metanational corporations are forced by a global catastrophe to concede more democratic powers to their workforces. Although there are many transnational and metanational corporations mentioned, two play an active role in the development of the plotline: Praxis, a largely benevolent and relatively democratic firm, and Subarashī, which plays a large role in the maltreatment of the citizens of Mars. Genetic engineering (GE) is first mentioned in Red Mars; it takes off when Sax creates an alga to withstand the harsh Martian temperature and convert its atmosphere into breathable air. Eventually this is done on a massive scale, with thousands of types of GE algae, lichen and bacteria being created to terraform the planet. In Green Mars, GE animals began to be created to withstand the thin Martian atmosphere, and to produce a working planetary-biosphere. By Blue Mars, GE is commonly being done on humans, willingly, to help them better adapt to the new worlds; to breathe thinner air (e.g. Russell), or to see better in the dimmer light of the outer planets. The books also speculate on the colonization of other planets and moons in the Solar System, and include descriptions of settlements or terraforming efforts on Callisto, Mercury, Titania, Miranda and Venus. Toward the end of the last novel, humans are taking sub-lightspeed colony ships to other stars, taking advantage of the longevity treatments to survive the trip to their destinations. A great portion of Blue Mars is concerned with the effects of extreme longevity on its protagonists, most of whom have lived over two hundred years as a result of repeated longevity treatments. In particular, Robinson speculates on the psychological effects of ultra-longevity, including memory loss, personality change, mental instability, and existential boredom. 
Characters The initial colonists from the Ares who established a permanent colony. Many of them later become leaders or exemplary figures in the transformation of Mars or its new society. The "First Hundred" actually consisted of 101, with Coyote being smuggled aboard the Ares by Hiroko. An American astronaut, who was the first human to walk on Mars in the year 2020. He returns a public hero and uses his considerable influence to lobby for a second mission, this time one of colonization. Boone received a large amount of radiation on his first trip to Mars, more than the recommended dosage according to medical regulations. However, his celebrity status allows him to skirt this. On the second voyage, Boone is one of the "First Hundred" colonists sent to permanently colonize Mars. His accomplishments and natural charm yield him an informal leadership role. In the first chapter of Red Mars, John Boone is assassinated in a plot instigated by Frank Chalmers. The narrative then steps back to the First Hundred's voyage to Mars aboard the spaceship Ares. His ideas continue as a point of reference for the remainder of the trilogy. Boone's character portrayal is complex; in one light, Boone is a stereotypically simple, heroic figure, an everyman hero: his first words on his first trip to Mars are "Well, here we are." He is almost uniformly cheerful and good-natured, and approaches everything he undertakes with hale bonhomie. But later in Red Mars, Robinson switches to Boone's point of view, and it is in this section that it is revealed that late in life, Boone is addicted to omegendorph, a fictional drug that is based on endorphins in the human brain. In addition, it reveals that at least some of his seeming simplicity might simply be an act designed to further his political goals. Overall, Boone is presented as larger-than-life. Head of the American contingent, he is Machiavellian in his use of power. However, his cynicism is later shown to be a form of self-defense; Chalmers is at least partly driven by a hidden idealistic side. Early in the voyage to Mars, he becomes sexually involved with Maya Toitovna, the leader of the Russian contingent of the mission. During the second half of the voyage, Toitovna becomes involved with Boone. Already bitter that Boone became the first to walk on Mars instead of him as they were both candidates for the mission and that he was allowed to join the colonization trip despite his manipulations, Chalmers further despises Boone because of Toitovna's affection. His dislike culminates in his involvement in a plot to assassinate Boone, which ultimately succeeds and allows him to take over handling major affairs on Mars. This ultimately becomes his undoing, as his ruthless governance and aggressive diplomatic work backfire on him during the revolution of 2061. In the final chapters of Red Mars, Chalmers flees with Toitovna and other members of the First Hundred to join the hidden colonists at the polar ice cap but dies along the way when he is caught outside their vehicle during an aquifer flood in Valles Marineris. An emotional woman who is at the center of a love triangle between Boone and Chalmers, she begins as head of the Russian contingent. The novels hint that she used both wit and seduction to rise through the ranks of the Russian space agency to become the leader of the first colonization mission. After the first revolution, she flees with other members of the First Hundred to the hidden colony in the pole. 
She becomes a school teacher of the children of the hidden colonists but later becomes a powerful political force. After the deaths of Chalmers and Boone, she falls in love with Michel Duval. She suffers heavily from bipolar disorder and from memory-related psychological disorders with growing age, which often lead her to isolate herself from others and sometimes turn violent. Throughout the novels, Maya takes an active political role, helping to keep the surviving First Hundred together during the failed revolution of 2061 and guiding the successful revolutions that occur decades later, despite her psychological problems. A Russian engineer who started out building nuclear reactors in Siberia, during the voyage and initial exploration of Mars, she does her best to avoid the squabbles of the other members of the First Hundred. Instead, she busies herself by building the first permanent habitation of Mars, Underhill, using programmed automated robots. She also helps to construct a new and larger habitat, and research facility in a nearby canyon. In the later books, she becomes a reluctant politician. Chernyshevski is in love with Bogdanov and is devastated when he is killed in an attack by anti-revolutionary forces associated with UNOMA, the transnationals and Phyllis Boyle during the first Martian revolution. In retaliation for Bogdanov's murder, she activates his hidden weapon system, built into Phobos, which causes the entire moon (a UNOMA/transnational military base) to decelerate in orbit and destructively aerobrake in Mars' atmosphere, utterly destroying it. In Blue Mars, she falls in love with Art Randolph, with whom she eventually starts a family. After Martian independence, she grudgingly becomes the first president of Mars. A mechanical engineer with anarchist leanings, possibly based on the Russian Machist, Alexander Bogdanov (the character's ancestor) and Arkady Strugatsky, he is regarded by many other members of the First Hundred, particularly Boyle, as a troublemaker. He leads the team which establishes an outpost on the moon Phobos, and leads an uprising against the transnational corporations towards the end of the first novel. Like Boone (with whom he was good friends), his political ideas (later known as Bogdanovism) weigh heavily on characters later in the series. In love with Nadia Chernyshevski, he is killed during the first Martian revolution in 2061. An American physicist, he is a brilliant and creative scientist, and is greatly respected for his intellectual gifts. However, he is socially awkward and often finds it difficult to understand and relate to other people. Russell is a leader of the Green movement, the goal of which is to terraform Mars. During Green Mars, Sax suffers a stroke while being tortured by government security forces and fellow member of the First Hundred, Phyllis Boyle (although it is later revealed that she actually opposed Sax's torture). He subsequently suffers from expressive aphasia and has to relearn how to speak and becomes less predictable in his actions. Originally apolitical, this event and a growing attachment to Mars itself leads Russell to become the physical architect of the second revolution. After memory issues become apparent in many of the remaining first hundred including Sax he begins work on an ambitious project to gather the remaining first hundred and have them try an experimental treatment he helped to develop. 
It is after this that Sax realizes his persistent attempts to please Ann are actually because he is also secretly in love with Ann Clayborne, who cannot stand him at first, but after centuries on Mars, eventually reconciles. Saxifrage means "stonebreaker" and is the name for an Alpine plant that grows between stones. An American geologist, Clayborne is one of the first areologists and maintains a stalwart desire to see Mars preserved in the state it holds when humans arrive. Clayborne early on debates Saxifrage Russell over the proper role of humanity on Mars and though initially apolitical, this stance marks her as the original "Red," while Russell's hands-on terraforming reflects the antithesis of these views. Clayborne is shown to prefer solitude during much of the series, and even her relationship with fellow First Hundred settler Simon (with whom she has a child) is subject to introspective silence in most cases. Simon's death and the estrangement she finds from their son Peter when the latter emerges as a leading moderate "green" drive her to further isolation. Clayborne's relationship with Russell is shown to be complex, the two of them taking early opposite views but the situation slowly changing as Russell comes to appreciate what has been unleashed and what has indeed been lost as science gives way to commercial exploitation that he cannot control. During the events of Blue Mars, Russell intervenes to save Clayborne's life; later, the two are revealed to have once shared an attraction that went astray because of a casual misinterpretation between them. Ann undergoes a drastic change toward Blue Mars due to the emergence of something inside of her that she describes as anti-Ann and something else that she can't quite describe. A Japanese expert on biology, agriculture, and ecological systems, it was Ai who smuggled Desmond "Coyote" Hawkins onto the Ares (the two were friends and lovers as students in London). She is the charismatic leader of the farm team, one of the important work groups and cliques among the First Hundred. She thus becomes the focus of many of the trilogy's central themes. Most importantly, she teaches the importance of maintaining a respectful relation with one's planet. On Mars, this is called the Areophany. In the secret colony Zygote, which Hiroko established, the first generation of children of the First Hundred, the ectogenes, are all the product of artificial insemination outside of any human body. Hiroko uses the ova of the female members of the First Hundred as the female genetic material and uses the sperm of the male members of the First Hundred to fertilize the ova. Although Hiroko is seldom at the center of the narrative, her influence is pervasive. She disappears for the final time in Green Mars. Her ultimate fate is left unresolved. Ai (愛) is the Japanese word for love. A French psychologist pivotally involved in the early psychological screening of the First Hundred candidates in Antarctica, which he describes as being a collection of double-bind requirements. Duval is assigned to accompany the Mars mission and is treated as an observer rather than as a member of the team during the early events of Red Mars. His aloof personality enforces this ostracism and also subverts his relationships with others, but in time it becomes clear that Duval is struggling with his own psychological issues perhaps more than anyone else from the expedition. 
During the first disappearance of the farm team, he is invited by Hiroko to flee with the farm team and establish Zygote, the first hidden colony. Duval desperately wants to return to Provence as he remembers it, and after visiting as a part of the Martian diplomatic mission to Earth, he becomes even more homesick. Duval falls in love with Maya Toitovna and guides her through particularly challenging psychological episodes throughout most of the series, dying late in Blue Mars of heart arrhythmia when Maya displays signs of very heavy temporary memory loss. A Russian biological scientist, nearly sixty when he arrives on Mars, Taneev is the oldest of the First Hundred. He heads medical treatment and most research projects on Mars, becoming famous as the creator of the gerontological treatment used to regenerate human cellular systems and ushering in a new era of longevity. He lives in Acheron on the Great Escarpment in the north of Mars before fleeing to the hidden colony after the First Revolution but later returns to his research, falling victim to the "quick decline" late in the events of Blue Mars. For much of his Mars-centric life, Taneev lives in a ménage à trois with Ursula and Marina, the exact nature of which is never resolved. Boyle is a Christian American geologist with a harsh personality that does not win her many friends among the First Hundred and earns her the particular enmity of Ann Clayborne. As the Mars situation develops, Boyle sides against most of the First Hundred in favor of the increasingly authoritarian United Nations Office of Mars Affairs (UNOMA) and its successor, the corporate/quasi-fascist United Nations Transitional Authority (UNTA). Her influence is strongest during the later events of Red Mars, where by the 2061 revolution she has been placed in charge of the asteroid Clarke that serves as the counterweight of the First Space Elevator. The events of the revolution send Clarke (and Boyle) spinning off into the outer Solar System at the end of Red Mars; Green Mars finds her back in the equation, but her influence is greatly reduced against the backdrop of a much-expanded UNTA presence. Boyle engages in a brief sexual relationship with Saxifrage Russell (who despises her) while the latter is living under an assumed identity; she is singularly capable of discerning who he really is, and turns him over to the UNTA. She is later present at a session in Kasei Vallis where Russell is being tortured, and is killed by Maya Toitovna. Later, as his memory recovers, Russell reveals that Boyle had been opposed to his torture and was demanding that he be released at the time that Maya's team freed him. A Trinidadian stowaway, he is a friend and supporter of Hiroko, and a fervent anarchist communist. Present in Red Mars only as a stowaway who eventually blends effortlessly into the Martian background, he is not even identified as anything more than Coyote until the beginning of Green Mars. He becomes a leading figure in the underground and an unofficial coordinator of a developing gift economy. Since the trilogy covers over 200 years of human history, later immigrants and the children and grandchildren of the First Hundred eventually become important characters in their own right. The Martians use the same terminology for different generations as Japanese Americans. People who immigrated from Earth are called issei, the first generation born on Mars are nisei, and the second-generation Martians are sansei. Third-generation Martians are called yonsei.
Kasei is the son of Hiroko and John Boone and the father of Jackie Boone. Kasei is the leader of the Kakaze, a radical Red faction. His name is Japanese for the planet Mars. He dies during the second revolution, after an unsuccessful attack on the second space elevator. The son of Hiroko and Coyote, he is raised communally by Hiroko and her followers in Zygote. He is a good-natured wanderer who eventually becomes a political leader advocating ties with Earth. He is one of the founders of the Free Mars movement and is famous for his running technique that allows him to run all day for days on end. As Nadia's assistants, he and Art are instrumental in getting the Martian constitutional declaration written. Later he is sent on a diplomatic mission to Earth but nearly dies from an infection. His name is ancient Babylonian for Mars (the planet and the war-god). The granddaughter of Hiroko and John Boone (raised with Nirgal), she emerges as a leader of the Free Mars movement, but is seen to change her platform based on whatever keeps her in power (e.g. changing from banning Earth immigration to allowing almost unlimited immigrants). After her daughter Zo's death, she retires in grief and joins a one-way expedition to an extrasolar planet near Aldebaran. Peter Clayborne is the son of Ann Clayborne and Simon Frazier, being one of the first children born on Mars. Peter holds a position of older brother to all of the following first generation. Many revolutionary and later political decisions of the Mars First movement are influenced by his opinions and judgment. He works part-time as an engineer and a green politician. Jackie's daughter; she has feline traits (purring) inserted into her genome via the gerontological longevity treatment. In Blue Mars, she travels the solar system running political errands for Jackie, although the two do not get along particularly well. Her character is portrayed as hedonistic and explicitly nihilistic, making sexual satisfaction a priority and seemingly having little regard for the feelings of others. On the other hand, she apparently has a conscience, risking her life to rescue a man on Mercury and later dying in an attempt to save a distressed flier. The daughter of Nadia and Art. A representative of the Praxis corporation sent to contact the Martian underground movement on a quasi-diplomatic mission in an attempt to create a system of ecological capitalism based on democratic corporations. Like the other metanationals, it takes on intensive economic and political ties with governments, but Praxis aims for partnerships rather than exploitive relationships. Bedouin nomads who originally emigrated from Egypt and respected figures in the Arab Martian community. Zeyk is a close friend of Chalmers. His eidetic memory becomes a minor plot point. The founder of Praxis, one of the huge multinational corporations. He embraces a fusion of Eastern and Western lifestyles. Development history In an interview at UCSD, Robinson said that he was looking at a satellite photo of Mars and thought that would be a great place to go backpacking. He said the Mars trilogy grew out of that urge. Awards Adaptations and uses The series has had difficulty moving into film and TV for over two decades. 
The Mars trilogy screen production rights were held by James Cameron in the late 1990s, who conceived a five-hour miniseries to be directed by Martha Coolidge, but he subsequently passed on the option.[citation needed] Later, Gale Ann Hurd planned a similar mini-series for the Sci-Fi Channel, which also remained unproduced. Then, in October 2008, it was reported that AMC and Jonathan Hensleigh had teamed up and were planning to develop a television mini-series based on Red Mars. In September 2014, SpikeTV announced it was working with producer Vince Gerardis to develop a TV series adaptation of Red Mars and in December 2015, formally greenlit a ten-episode first season of a TV series based on the novels, with J. Michael Straczynski serving as showrunner and writer. However, in March 2016, Deadline reported that Straczynski had left his position as showrunner with Peter Noah replacing him, but he too left due to creative differences with Spike. Spike then put the series on hold for further development. The content of Green Mars and the cover artwork for Red Mars are included on the Phoenix DVD, carried on board Phoenix, a NASA lander that successfully touched down on Mars in May 2008. The First Interplanetary Library is intended to be a sort of time capsule for future Mars explorers and colonists. Translations in other languages The trilogy has been translated into Spanish, French, German, Russian, Chinese, Polish, Hebrew, Japanese, Italian, Romanian, Bulgarian, and Serbian, among others. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_media_optimization] | [TOKENS: 2963] |
Contents Social media optimization Social media optimization (SMO) is the use of online platforms to generate income or publicity to increase the awareness of a brand, event, product or service. Types of social media involved include RSS feeds, blogging sites, social bookmarking sites, social news websites, video sharing websites such as YouTube and social networking sites such as Facebook, Instagram, TikTok and X (Twitter). SMO is similar to search engine optimization (SEO) in that the goal is to drive web traffic and draw attention to a company or creator. SMO's focal point is on gaining organic links to social media content. In contrast, SEO's core is about reaching the top of the search engine hierarchy. In general, social media optimization refers to optimizing a website and its content to encourage more users to use and share links to the website across social media and networking sites. SMO is used to strategically create online content, ranging from well-written text to eye-catching digital photos or video clips, that encourages and entices people to engage with a website. Users share this content, via its weblink, with social media contacts and friends. Common examples of social media engagement are "liking and commenting on posts, retweeting, embedding, sharing, and promoting content". Social media optimization is also an effective way of implementing online reputation management (ORM), meaning that if someone posts bad reviews of a business, an SMO strategy can ensure that the negative feedback is not the first link to come up in a list of search engine results. In the 2010s, with social media sites overtaking TV as a source for news for young people, news organizations have become increasingly reliant on social media platforms for generating web traffic. Publishers such as The Economist employ large social media teams to optimize their online posts and maximize traffic, while other major publishers now use advanced artificial intelligence (AI) technology to generate higher volumes of web traffic. Relationship with search engine optimization Social media optimization is an increasingly important factor in search engine optimization, which is the process of designing a website in such a way that it ranks as high as possible on search engines. Search engines are increasingly utilizing the recommendations of users of social networks such as Reddit, Facebook, Tumblr, Twitter, YouTube, LinkedIn, Pinterest and Instagram to rank pages in the search engine result pages. The implication is that when a webpage is shared or "liked" by a user on a social network, it counts as a "vote" for that webpage's quality. Thus, search engines can use such votes to rank websites accordingly in search engine results pages. Furthermore, since it is more difficult to tip the scales or influence the search engines in this way, search engines are putting more stock into social search. This, coupled with increasingly personalized search based on interests and location, has significantly increased the importance of a social media presence in search engine optimization. Due to personalized search results, location-based social media presences on websites such as Yelp, Google Places, Foursquare, and Yahoo! Local have become increasingly important. While social media optimization is related to search engine marketing, it differs in several ways.
Primarily, SMO focuses on driving web traffic from sources other than search engines, though improved search engine ranking is also a benefit of successful social media optimization. Further, SMO is helpful for targeting particular geographic regions in order to reach potential customers. This helps in lead generation (finding new customers) and contributes to high conversion rates (i.e., converting previously uninterested individuals into people who are interested in a brand or organization). Relationship with viral marketing Social media optimization is in many ways connected to the technique of viral marketing or "viral seeding", where word of mouth is created through the use of networking in social bookmarking, video and photo sharing websites. An effective SMO campaign can harness the power of viral marketing; for example, 80% of activity on Pinterest is generated through "repinning."[citation needed] Furthermore, by following social trends and utilizing alternative social networks, websites can retain existing followers while also attracting new ones. This allows businesses to build an online following and presence, all linking back to the company's website for increased traffic. For example, with an effective social bookmarking campaign, not only can website traffic be increased, but a site's rankings can also be increased. In a similar way, engagement with blogs produces a comparable result by sharing content through the use of RSS in the blogosphere. Social media optimization is considered an integral part of an online reputation management (ORM) or search engine reputation management (SERM) strategy for organizations or individuals who care about their online presence. SMO is one of six key influencers that affect the Social Commerce Construct (SCC). Online activities such as consumers' evaluations of and advice on products and services constitute part of what creates a Social Commerce Construct (SCC).[citation needed] Social media optimization is not limited to marketing and brand building. Increasingly, smart businesses are integrating social media participation as part of their knowledge management strategy (i.e., product/service development, recruiting, employee engagement and turnover, brand building, customer satisfaction and relations, business development and more). Additionally, social media optimization can be implemented to foster a community around the associated site, allowing for a healthy business-to-consumer (B2C) relationship. Origins and implementation According to technologist Danny Sullivan, the term "social media optimization" was first used and described by marketer Rohit Bhargava on his marketing blog in August 2006. In the same post, Bhargava established the five important rules of social media optimization. Bhargava believed that by following his rules, anyone could influence the levels of traffic and engagement on their site, increase popularity, and ensure that it ranks highly in search engine results. An additional 11 SMO rules have since been added to the list by other marketing contributors, bringing the total to 16 rules of SMO according to one source. Bhargava's initial five rules were more specifically designed for SMO, while the list is now much broader and addresses everything that can be done across different social media platforms. According to author and CEO of TopRank Online Marketing, Lee Odden, a Social Media Strategy is also necessary to ensure optimization. This is a similar concept to Bhargava's list of rules for SMO.
The Social Media Strategy may consider: According to Lon Safko and David K. Brake in The Social Media Bible, it is also important to act like a publisher by maintaining an effective organizational strategy, to have an original concept and unique "edge" that differentiates one's approach from competitors, and to experiment with new ideas if things do not work the first time. If a business is blog-based, an effective method of SMO is using widgets that allow users to share content to their personal social media platforms. This will ultimately reach a wider target audience and drive more traffic to the original post. Blog widgets and plug-ins for post-sharing are most commonly linked to Facebook, LinkedIn and x.com. They occasionally also link to social media platforms such as Tumblr and Pinterest. Many sharing widgets also include user counters which indicate how many times the content has been liked and shared across different social media pages. This can influence whether or not new users will engage with the post, and also gives businesses an idea of what kind of posts are most successful at engaging audiences. By using relevant and trending keywords in titles and throughout blog posts, a business can also increase search engine optimization and the chances of their content of being read and shared by a large audience. The root of effective SMO is the content that is being posted, so professional content creation tools can be very beneficial. These can include editing programs such as Photoshop, GIMP, Final Cut Pro, and Dreamweaver. Many websites also offer customization options such as different layouts to personalize a page and create a point of difference. Publishing industry With social media sites overtaking TV as a source for news for young people, news organizations have become increasingly reliant on social media platforms for generating traffic. A report by Reuters Institute for the Study of Journalism described how a 'second wave of disruption' had hit news organizations, with publishers such as The Economist having to employ large social media teams to optimize their posts, and maximize traffic. Within the context of the publishing industry, even professional fields are utilizing SMO. Because doctors want to maximize exposure to their research findings SMO has also found a place in the medical field. Today, 3.8 billion people globally are using some form of social media.[citation needed] People frequently obtain health-related information from online social media platforms like Twitter and Facebook. Healthcare professionals and scientists can communicate with other medical-counterparts to discuss research and findings through social media platforms. These platforms provide researchers with data sets and surveillance that help detect patterns and behavior in preventing, informing, and studying global disease; COVID-19. Additionally, researchers utilize SMO to reach and recruit hard-to-reach patients. SMO narrows specified demographics that filter necessary data in a given study.[citation needed] Social network games Social media gaming is online gaming activity performed through social media sites with friends and online gaming activity that promotes social media interaction. Examples of the former include FarmVille, Clash of Clans, Clash Royale, FrontierVille, and Mafia Wars. In these games a player's social network is exploited to recruit additional players and allies. 
An example of the latter is Empire Avenue, a virtual stock exchange where players buy and sell shares of each other's social network worth. Nielsen Media Research estimates that, as of June 2010, social networking and playing online games account for about one-third of all online activity by Americans. Facebook Facebook has in recent years become a popular channel for advertising, alongside traditional forms such as television, radio, and print. With over 1 billion active users, 50% of whom log into their accounts every day, it is an important communication platform that businesses can utilize and optimize to promote their brand and drive traffic to their websites. There are three commonly used strategies to increase advertising reach on Facebook: improving the effectiveness of posts, increasing network size, and buying more reach. Improving effectiveness and increasing network size are organic approaches, while buying more reach is a paid approach which does not require any further action. Most businesses will attempt an "organic" approach to gaining a significant following before considering a paid approach. Because Facebook requires a login, it is important that posts are public to ensure they will reach the widest possible audience. Posts that have been heavily shared and interacted with by users are displayed as 'highlighted posts' at the top of newsfeeds. In order to achieve this status, the posts need to be engaging, interesting, or useful. This can be achieved by being spontaneous, asking questions, addressing current events and issues, and optimizing trending hashtags and keywords. The more engagement a post receives, the further it will spread and the more likely it is to feature first in search results. Another organic approach to Facebook optimization is cross-linking different social platforms. By posting links to websites or social media sites in the profile 'about' section, it is possible to direct traffic and ultimately increase search engine optimization. Another option is to share links to relevant videos and blog posts. Facebook Connect is a functionality that launched in 2008 to allow Facebook users to sign up to different websites, enter competitions, and access exclusive promotions by logging in with their existing Facebook account details. This is beneficial to users as they don't have to create a new login every time they want to sign up to a website, but also beneficial to businesses as Facebook users become more likely to share their content. Often the two are interlinked, where in order to access parts of a website, a user has to like or share certain things on their personal profile or invite a number of friends to like a page. This can lead to greater traffic flow to a website as it reaches a wider audience. Businesses have more opportunities to reach their target markets if they choose a paid approach to SMO. When Facebook users create an account, they are urged to fill out their personal details such as gender, age, location, education, current and previous employers, religious and political views, interests, and personal preferences such as movie and music tastes. Facebook then takes this information and allows advertisers to use it to determine how to best market themselves to users that they know will be interested in their product. This is also known as micro-targeting. If a user clicks on a link to like a page, it will show up on their profile and newsfeed. This then feeds back into organic social media optimization, as friends of the user will see this and be encouraged to click on the page themselves.
Although advertisers are buying mass reach, they are attracting a customer base with a genuine interest in their product. Once a customer base has been established through a paid approach, businesses will often run promotions and competitions to attract more organic followers. The number of businesses that use Facebook to advertise also holds significant relevance. In 2017, there were three million businesses that advertised on Facebook, making it the world's largest platform for social media advertising. The amount of money leading businesses spend on Facebook advertising alone is also significant: Procter & Gamble spends $60 million every year on Facebook advertising, and other major advertisers include Microsoft, with a yearly spend of £35 million, as well as Amazon, Nestle, and American Express, each with yearly expenditures above £25 million. The number of small businesses advertising on Facebook is also relevant; it has grown rapidly in recent years and demonstrates how important social media advertising has become. Currently, 70% of the UK's small businesses use Facebook advertising, a substantial number of advertisers, and almost half of the world's small businesses use a social media marketing product of some sort. This demonstrates the impact that social media has had on the current digital marketing era. Engagement rate The engagement rate (ER) represents the level of user activity on a specific profile on Facebook, Instagram, TikTok, or any other social media platform. A common way to calculate it is the following: $ER = \frac{\overline{interactions}}{followers} \times 100\%$. In this formula, followers is the total number of followers (friends, subscribers, etc.), and interactions stands for the number of interactions, such as likes, comments, personal messages, and shares. The latter is averaged over a certain period of time, which should normally be short enough that the change in the number of followers is negligible during that period. See also References |
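To make the engagement-rate formula concrete, here is a minimal Python sketch; the follower count and per-post interaction totals are hypothetical values chosen only for illustration.

```python
# Minimal sketch of the engagement-rate (ER) calculation described above.
# The follower count and per-post interaction totals are hypothetical.

def engagement_rate(interactions_per_post, followers):
    """ER (%) = mean interactions per post / total followers * 100."""
    mean_interactions = sum(interactions_per_post) / len(interactions_per_post)
    return mean_interactions / followers * 100

# Likes, comments, and shares counted for four recent posts of a profile
# with 10,000 followers.
posts = [320, 410, 150, 280]
print(f"ER = {engagement_rate(posts, 10_000):.2f}%")  # ER = 2.90%
```

As noted above, the averaging window should be short enough that the follower count stays roughly constant while the interactions are being averaged.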
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/County_(United_States)] | [TOKENS: 5118] |
Contents County (United States) In the United States, a county or county equivalent is an administrative subdivision of a state or territory, typically with defined geographic boundaries and some level of governmental authority. The term "county" is used in 48 states, while Louisiana and Alaska have functionally equivalent subdivisions called parishes and boroughs, respectively. Counties and other local governments exist as a matter of U.S. state law, so the specific governmental powers of counties may vary widely between the states, with many providing some level of services to civil townships, municipalities, and unincorporated areas. Certain municipalities are in multiple counties. Some municipalities have been consolidated with their county government to form consolidated city-counties or have been legally separated from counties altogether to form independent cities. Conversely, counties in Connecticut and Rhode Island, eight of Massachusetts's 14 counties, and Alaska's Unorganized Borough have no government power, existing only as geographic distinctions. The United States Census Bureau uses the term "county equivalent" to describe places that are comparable to counties, but called by different names. Louisiana parishes, the organized boroughs of Alaska, independent cities, and the District of Columbia are equivalent to counties for administrative purposes. Alaska's Unorganized Borough is further divided into 11 census areas that are statistically equivalent to counties. In 2024, the U.S. Census Bureau began to also recognize Connecticut's councils of governments, which took over some of the regional powers from the state's former county governments, as county equivalents. Territories of the United States do not have counties; instead, the United States Census Bureau also divides them into county equivalents. The U.S. Census Bureau counts American Samoa's districts and atolls as county equivalents. American Samoa locally has places called "counties", but these entities are considered to be "minor civil divisions" (not true counties) by the U.S. Census Bureau. The number of counties per state ranges from the three counties of Delaware to the 254 counties of Texas. County populations also vary widely; in 2017, according to the Census Bureau, more than half the U.S. population was concentrated in just 143 of the more than 3,000 counties, or just 4.6% of all counties. As of 2017, the five most populous counties, ordered from most to least, were Los Angeles County, California; Cook County, Illinois; Harris County, Texas; Maricopa County, Arizona; and San Diego County, California. As of 2022[update], there are 3,144 counties and county-equivalents in the 50 states and the District of Columbia. If the 100 county equivalents in the U.S. territories are counted, then the total is 3,244 counties and county-equivalents in the United States.[b] History The idea of counties originated with the counties of England. English (after 1707, British) colonists brought to their colonies in North America a political subdivision that they already used in the British metropole: the counties. Counties were among the earliest units of local government established in the Thirteen Colonies that would become the United States. Virginia created the first counties in order to ease the administrative workload in Jamestown. 
The House of Burgesses divided the colony first into four "incorporations" in 1617 and finally into eight shires (or counties) in 1634: James City, Henrico, Charles City, Charles River, Warrosquyoake, Accomac, Elizabeth City, and Warwick River. America's oldest intact county court records can be found at Eastville, Virginia, in Northampton (originally Accomac) County, dating to 1632. Maryland established its first county, St. Mary's, in 1637. In 1639, the Province of Maine founded York County. Massachusetts followed in 1643. Pennsylvania and New York delegated significant power and responsibility from the colony government to county governments and thereby established a pattern for most of the United States, although counties remained relatively weak in New England. When independence came, the framers of the Constitution left the matter to the states. Subsequently, state constitutions conceptualized county governments as arms of the state. Louisiana instead adopted the local divisions called parishes, which dated back to both the Spanish colonial and French colonial periods, when the land was dominated by the Catholic Church. In the twentieth century, the role of local governments strengthened and counties began providing more services, acquiring home rule and county commissions to pass local ordinances pertaining to their unincorporated areas. In 1955, delegates to the Alaska Constitutional Convention wanted to avoid the traditional county system and adopted their own unique model with different types of boroughs varying in powers and duties. In some states, these powers are partly or mostly devolved to the counties' smaller divisions, usually called townships, though in New York, New England, and Wisconsin they are called "towns". The county may or may not be able to override its townships on certain matters, depending on state law. The newest county in the United States is the consolidated city-county of Broomfield, Colorado, established in 2001 from parts of four existing counties. The newest county equivalents are the Alaskan census areas of Chugach and Copper River, both formed in 2019 from the now-defunct Valdez–Cordova Census Area, and the Alaskan boroughs of Petersburg, established in 2013; Wrangell, established in 2008; and Skagway, established in 2007. County variations A consolidated city-county is simultaneously a city, which is a municipality (municipal corporation), and a county, which is an administrative division of a state, having the powers and responsibilities of both types of entities. The city limit or jurisdiction is coterminous with the county line, as the two administrative entities become a non-dichotomous single entity. For this reason, a consolidated city-county is officially referred to as name of city – name of county (e.g., Augusta–Richmond County in Georgia). The same is true of the boroughs of New York City, each of which is coextensive with a county of New York State. For those entities in which the city uses the same name as the county, city and county of name may be used (e.g., City and County of Denver in Colorado). Similarly, some of Alaska's boroughs have merged with their principal cities, creating unified city-boroughs. Some such consolidations and mergers have created cities that rank among the geographically largest cities in the world, though often with population densities far below those of most urban areas.
There are 40 consolidated city-counties in the U.S., including Augusta–Richmond County; the City and County of Denver, Colorado; the City and County of Honolulu, Hawaii; Indianapolis–Marion County, Indiana; Jacksonville–Duval County, Florida; Louisville–Jefferson County, Kentucky; Lexington–Fayette County, Kentucky; Kansas City–Wyandotte County, Kansas; Nashville–Davidson County, Tennessee; New Orleans–Orleans Parish, Louisiana; the City and County of Philadelphia, Pennsylvania; the City and County of San Francisco, California; and Lynchburg-Moore County, Tennessee. A consolidated city-county may still contain independent municipalities maintaining some governmental powers that did not merge with the rest of the county. For example, the government of Jacksonville–Duval County, Florida, still provides county-level services to the four independent municipalities within its borders: Atlantic Beach, Baldwin, Jacksonville Beach, and Neptune Beach. The term county equivalents is used by the United States Census Bureau to describe divisions that are comparable to counties but called by different names. Consolidated city-counties are not designated county equivalents for administrative purposes; since both the city and the county at least nominally exist, they are properly classified as counties in their own right. Likewise, the boroughs of New York City are coextensive with counties and are therefore by definition also not county equivalents. Most U.S. territories are directly divided into municipalities or similar units, which are mostly treated as county equivalents for statistical purposes: American Samoa has 15 of its own counties, but the U.S. Census Bureau treats these as minor civil divisions and the three districts and two atolls as county equivalents. The U.S. Census Bureau counts all of Guam as one county equivalent (with the FIPS code 66010), while the USGS counts Guam's election districts (villages) as county equivalents. The U.S. Census Bureau counts the three main islands in the U.S. Virgin Islands as county equivalents, while the USGS counts the districts of the U.S. Virgin Islands (of which there are two) as county equivalents. Names and etymologies Common sources of county names are names of people, geographic features, places in other states or countries, and animals. Counties are most often named for people, often political figures or early settlers, with over 2,100 of the 3,144 total so named. The most common county name, with 31, is Washington County, for America's first president, George Washington. Up until 1871, there was a Washington County within the District of Columbia, but it was dissolved by the District of Columbia Organic Act. Jefferson County, for Thomas Jefferson, is next with 26. The most recent president to have a county named for him was Warren G. Harding, reflecting the slowing rate of county creation since New Mexico and Arizona became states in 1912. The most common counties named after non-presidents are Franklin (25), Clay (18), and Montgomery (18). After people, the next most common sources of county names are geographic features and locations. Some counties are even named after counties in other states or for places in other countries; names drawn from the United Kingdom are most common in the area of the original Thirteen Colonies, while names drawn from other countries are most common in places that received a large number of immigrants from a particular area. The most common geographic county name is Lake.
Words from Native American languages, as well as the names of Native American leaders and tribes, lend their names to many counties. Many counties bear names of French or Spanish origin, such as Marquette County, named after the French missionary Father Jacques Marquette. The term for Louisiana's county equivalents, parishes (Fr. paroisse civile and Sp. parroquia), originates from the state's French and Spanish colonial periods. Before the Louisiana Purchase and the granting of statehood, government was often administered in towns where major church parishes were located. Of the original 19 civil parishes of Louisiana, which date from 1807 (before statehood), nine were named after the Roman Catholic parishes from which they were governed. County government The structure and powers of a county government may be defined by the general law of the state or by a charter specific to that county. States may allow only general-law counties, only charter counties, or both. Generally, general-law local governments have less autonomy than chartered local governments. Counties are usually governed by an elected body, variously called the county commission, board of supervisors, commissioners' court, county council, county court, or county legislature. In cases in which a consolidated city-county or independent city exists, a city council usually governs city/county or city affairs. In some counties, day-to-day operations are overseen by an elected county executive or by a chief administrative officer or county administrator who reports to the board, the mayor, or both. In many states, the board in charge of a county holds powers that transcend all three traditional branches of government. It has the legislative power to enact laws for the county; it has the executive power to oversee the executive operations of county government; and it has quasi-judicial power with regard to certain limited matters (such as hearing appeals from the planning commission if one exists). In many states, several important officials are elected separately from the board of commissioners or supervisors and cannot be fired by the board. These positions may include county clerk, county treasurer, county surrogate, sheriff, and others. District attorneys or state attorneys are usually state-level as opposed to county-level officials, but in many states, counties and state judicial districts have coterminous boundaries. The site of a county's administration, and often the county courthouse, is generally called the county seat ("parish seat" in Louisiana, "borough seat" in Alaska, or "shire town" in several New England counties). The county seat is usually located in a municipality. However, some counties may have multiple seats or no seat. In some counties with no incorporated municipalities, a large settlement may serve as the county seat. The power of county governments varies widely from state to state, as does the relationship between counties and incorporated cities. The powers of counties arise from state law and vary widely. In Connecticut and Rhode Island, counties are geographic entities, but not governmental jurisdictions. At the other extreme, Maryland counties and the county equivalent City of Baltimore handle almost all services, including public education, although the state retains active oversight authority over many of these services. Counties in Hawaii also handle almost all services since there is no formal level of government (municipality, public education, or otherwise) existing below that of the county in the state.
In most Midwestern and Northeastern states, counties are further subdivided into townships or towns, which sometimes exercise local powers or administration. Throughout the United States, counties may contain other independent, self-governing municipalities. In New England, counties function at most as judicial court districts and sheriff's departments (presently, in Connecticut only as judicial court districts, and in Rhode Island they have lost both those functions and most others, but they are still used by the United States Census Bureau and some other federal agencies for some federal functions), and most of the governmental authority below the state level is in the hands of towns and cities. In several of Maine's sparsely populated counties, small towns rely on the county for law enforcement, and in New Hampshire several social programs are administered at the county level. In Connecticut, Rhode Island, and parts of Massachusetts, counties are now only geographic designations, and they do not have any governmental powers. All government is either done at the state level or at the municipal level. In Connecticut and parts of Massachusetts, regional councils have been established to partially fill the void left behind by the abolished county governments.[e] The regional councils' authority is limited compared with that of a county government; they have authority only over infrastructure and land use planning, the distribution of state and federal funds for infrastructure projects, emergency preparedness, and limited law enforcement duties. In the Mid-Atlantic and Midwest, counties typically provide, at a minimum, courts, public utilities, libraries, hospitals, public health services, parks, roads, law enforcement, and jails. There is usually a county registrar, recorder, or clerk (the exact title varies) who collects vital statistics, holds elections (sometimes in coordination with a separate elections office or commission), and prepares or processes certificates of births, deaths, marriages, and dissolutions (divorce decrees). The county recorder normally maintains the official record of all real estate transactions. Other key county officials include the coroner/medical examiner, treasurer, assessor, auditor, comptroller, and district attorney. In most states, the county sheriff is the chief law enforcement officer in the county. However, except in major emergencies where clear chains of command are essential, the county sheriff normally does not directly control the police departments of city governments, but merely cooperates with them (e.g., under mutual aid pacts). Thus, the most common interaction between county and city law enforcement personnel is when city police officers deliver suspects to sheriff's deputies for detention or incarceration in the county jail. In most states, the state courts and local law enforcement are organized along county boundaries. However, nearly all of the substantive and procedural law adjudicated in state trial courts originates from the state legislature and state appellate courts. In other words, most criminal defendants are prosecuted for violations of state law, not local ordinances, and if they, the district attorney, or police seek reforms to the criminal justice system, they will usually have to direct their efforts towards the state legislature rather than the county (which merely implements state law).
A typical criminal defendant will be arraigned and subsequently indicted or held over for trial before a trial court in and for a particular county where the crime occurred, kept in the county jail (if he is not granted bail or cannot make bail), prosecuted by the county's district attorney, and tried before a jury selected from that county. But long-term incarceration is rarely a county responsibility, execution of capital punishment is never a county responsibility, and the state's responses to prisoners' appeals are the responsibility of the state attorney general, who has to defend before the state appellate courts the prosecutions conducted by locally elected district attorneys in the name of the state. Furthermore, county-level trial court judges are officers of the judicial branch of the state government rather than county governments. In many states, the county controls all unincorporated lands within its boundaries. In states with a township tier, unincorporated land is controlled by the townships. Residents of unincorporated land who are dissatisfied with county-level or township-level resource allocation decisions can attempt to vote to incorporate as a city, town, or village. A few counties directly provide public transportation themselves, usually in the form of a simple bus system. However, in most counties, public transportation is provided by one of the following: a special district that is coterminous with the county (but exists separately from the county government), a multi-county regional transit authority, or a state agency. In western and southern states, more populated counties provide many facilities, such as airports, convention centers, museums, recreation centers, beaches, harbors, zoos, clinics, law libraries, and public housing. They provide services such as child and family services, elder services, mental health services, welfare services, veterans assistance services, animal control, probation supervision, historic preservation, food safety regulation, and environmental health services. They have many additional officials like public defenders, arts commissioners, human rights commissioners, and planning commissioners. There may be a county fire department and a county police department – as distinguished from fire and police departments operated by individual cities, special districts, or the state government. For example, Gwinnett County, Georgia, and its county seat, the city of Lawrenceville, each have their own police departments. (A separate county sheriff's department is responsible for security of the county courts and administration of the county jail.) In several southern states, public school systems are organized and administered at the county level. Statistics As of 2024[update], there were 2,999 counties, 64 Louisiana parishes, 19 organized boroughs and 11 census areas in Alaska, 9 councils of government in Connecticut, 41 independent cities,[f] and the District of Columbia for a total of 3,144 counties and county equivalents in the 50 states and District of Columbia. There are an additional 100 county equivalents in the territories of the United States. The average number of counties per state is 62, with a range from the three counties of Delaware to the 254 counties of Texas. 
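The component counts quoted in the statistics above can be checked with a short Python sketch: summing them reproduces the 3,144 total, and dividing by the 50 states plus the District of Columbia (an assumed denominator, since the text does not state how the average was computed) gives roughly the quoted average of 62 per state.

```python
# Counts for the 50 states plus D.C., as quoted in the statistics above.
components = {
    "counties": 2_999,
    "Louisiana parishes": 64,
    "Alaska organized boroughs": 19,
    "Alaska census areas": 11,
    "Connecticut councils of governments": 9,
    "independent cities": 41,
    "District of Columbia": 1,
}

total = sum(components.values())
print(total)              # 3144 counties and county equivalents
print(round(total / 51))  # ~62 per jurisdiction, assuming 50 states + D.C.
```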
Southern and Midwestern states generally tend to have more counties than Western or Northeastern states, as many Northeastern states are not large enough in area to warrant a large number of counties, and many Western states were sparsely populated when counties were created by their respective state legislatures. The five counties of Rhode Island and eight of the 14 counties of Massachusetts no longer have functional county governments, but continue to exist as legal and census entities. Connecticut abolished county governments in 1960, leaving its eight counties as mere legal and census entities. In 2022, the U.S. Census Bureau recognized the state's nine councils of governments as replacements for the state's eight legacy counties for all statistical purposes; full implementation was completed in 2024. The average U.S. county population was 104,435 in 2019, while the median county, Nicholas County, West Virginia, had a population of 25,965 in 2019. The most populous county is Los Angeles County, California, with 10,014,009 residents in 2020. This number is greater than the populations of 41 U.S. states, and is only slightly smaller than the combined population of the 10 least populous states and Washington, D.C. The population of Los Angeles County is also about 17.4 times that of the least populous state, Wyoming. The second most populous county is Cook County, Illinois, with a population of 5,275,541. Cook County's population is larger than that of 28 individual U.S. states and the combined populations of the six smallest states. The least populous county is Loving County, Texas, with 64 residents in 2020. Eight county equivalents in the U.S. territories have no human population: Rose Atoll, Northern Islands Municipality, Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, and Navassa Island. The remaining three islands in the U.S. Minor Outlying Islands (Midway Atoll, Palmyra Atoll and Wake Island) have small non-permanent human populations. The county equivalent with the smallest non-zero population counted in the census is Swains Island, American Samoa (17 people), although since 2008 this population has not been permanent either. The most densely populated county or county equivalent is New York County, New York (coextensive with the New York City Borough of Manhattan), with 72,033 persons per square mile (27,812 persons/km2) in 2015. The Yukon–Koyukuk Census Area, Alaska, is both the most extensive and the least densely populated county or county equivalent, with 0.0380 persons per square mile (0.0147 persons/km2) in 2015. In the 50 states (plus the District of Columbia), a total of 981 counties have a population over 50,000; 592 counties have a population over 100,000; 137 counties have a population over 500,000; 45 counties have a population over 1,000,000; and 14 counties have a population over 2,000,000. At the other extreme, 35 counties have a population under 1,000; 307 counties have a population under 5,000; 709 counties have a population under 10,000; and 1,492 counties have a population between 10,000 and 50,000. At the 2000 U.S. census, the median land area of U.S. counties was 622 sq mi (1,610 km2), which is two-thirds of the median land area of a ceremonial county of England, and a little more than a quarter of the median land area of a French département. Counties in the western United States typically have a much larger land area than those in the eastern United States.
For example, the median land area of counties in Georgia is 343 sq mi (890 km2), whereas in Utah it is 2,427 sq mi (6,290 km2). The most extensive county or county equivalent is the Yukon–Koyukuk Census Area, Alaska, with a land area of 145,505 square miles (376,856 km2). All nine of the most extensive county equivalents are in Alaska. The most extensive county is San Bernardino County, California, with a land area of 20,057 square miles (51,947 km2). The least extensive county is Kalawao County, Hawaii, with a land area of 11.991 square miles (31.058 km2). The least extensive county equivalent in the 50 states is the independent city of Falls Church, Virginia, with a land area of 1.999 square miles (5.177 km2). If U.S. territories are included, the least extensive county equivalent is Kingman Reef, with a land area of 0.01 square miles (0.03 km2). Geographic relationships between cities and counties In some states, a municipality may be in only one county and may not annex territory in adjacent counties, but in the majority of states, the state constitution or state law allows municipalities to extend across county boundaries. At least 32 states include municipalities in multiple counties. Dallas, for example, contains portions of five counties, while numerous other cities comprise portions of four counties. New York City is an unusual case because it encompasses multiple entire counties in one city. Each of those counties is coextensive with one of the five boroughs of the city: Manhattan (New York County), The Bronx (Bronx County), Queens (Queens County), Brooklyn (Kings County), and Staten Island (Richmond County). See also Explanatory notes References External links |
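The paired square-mile and square-kilometre figures quoted in the land-area comparisons above follow from the standard conversion factor of roughly 2.59 km² per square mile, as in this minimal sketch; the county figures used below are taken from the text.

```python
# Converting the land-area figures quoted above from square miles to km2.
SQ_MI_TO_KM2 = 2.589988  # square kilometres per square mile

def to_km2(area_sq_mi):
    return area_sq_mi * SQ_MI_TO_KM2

print(round(to_km2(20_057)))    # 51947 km2 (San Bernardino County, California)
print(round(to_km2(1.999), 3))  # 5.177 km2 (Falls Church, Virginia)
```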
======================================== |